Daily Digest
For those with FOMO. Never miss a headline and be the first to spot the next big thing among the top 10 products each day.
PRODUCT HIGHLIGHT
June 28th, 2024
Intercom’s latest AI launch aims to free up more time to focus on the important stuff

What a week it has been for launches. We’ve seen big announcements from the likes of Anthropic, Figma, Superhuman, and Notion and the week’s not over just yet. 

Intercom, the company behind the support widget that pops up on sites all over the web, hopped on the shipping train and announced two new features designed to help you and your team save time and serve your users more efficiently. Let’s dive in. 

A new Copilot: Arguably the biggest announcement is Intercom opening its copilot, Fin AI, to the general public after a few months of closed beta. Fin AI is the company’s flagship AI tool, built to act as an AI assistant for every support agent you might have. 

When a user asks a question, Fin will pull relevant information from different sources and format it into a potential answer. From there, you can choose to accept that as an answer and send it on to the user. The goal is to free up employee time to better focus on things that matter the most, like forging a deeper relationship with users. 

Knowledge Hub: If you’re going to employ an AI agent for support queries, you will need somewhere to pull information from. In Intercom’s world, that place is the Knowledge Hub. Unlike assistants that draw on the open internet, which can lead to hallucinations, Fin’s information is curated by your team.

With the Knowledge Hub, you can centralize, manage, control, and optimize all the information that powers AI, agents, and self-serve support in one place.

PRODUCT HIGHLIGHT
June 27th, 2024
Here’s everything announced at Figma’s Config conference

It’s that time of year again when designers worldwide flock to an event center in San Francisco. I’m talking about Config, the annual event where the darling design platform Figma shows off what it’s been working on. 

Unsurprisingly, AI is still the star of the show, but the company also announced some other updates, including a Google Slides competitor and a new, refreshed look for the platform. 

Let’s start with the design overhaul. According to the announcement, it’s been redesigned from the ground up to “focus the canvas less on our UI and more on your work.” Users will notice a new toolbar, new navigation, rounded corners across the UI, and some 200+ refreshed icons. 

Moving on from the redesign, the team also announced Figma Slides, probably one of its most practical features yet. As the name suggests, Figma Slides is a Google Slides-style presentation feature. There are a few Figma-oriented features built to delight designers, like tweaking your deck designs in real time instead of jumping between a slide and its corresponding frame. 

You’ll also be able to present that app prototype you spent hours perfecting right from your deck, meaning you won’t have to create a whole screen recording just to show your team your vision for how every piece connects.

Remember how I said AI was still the star of the show? The headline feature yesterday was arguably Figma AI, a suite of generative AI features that are built to help you design and iterate quickly. Like Framer’s AI option, Figma AI lets you generate an entire prototype for web, mobile, and anything in between with a prompt. 

Alongside that, Figma AI comes with features built to speed up smaller tasks like generating placeholder text and an “AI-enhanced” asset search function.

PRODUCT HIGHLIGHT
June 26th, 2024
Notion’s new feature allows you to create sites out of your documents

Ever since Notion appeared on the scene back in 2016, people have been using it for all sorts of things: organizing their day, building internal documentation, creating and selling templates, and even building entire sites.

That last one has become a popular niche among the Notion community since the company allowed pages to be made public. A handful of products have been built around the concept, and now Notion is joining the party by launching Notion Sites. 

As the name suggests, Notion Sites is the company’s solution for those who want to build and host a site using their Notion documents. It is essentially an expansion of the existing publishing feature but with a number of bells and whistles tacked on. 

When you publish your Notion document as a site, you’ll now be able to do several things to make it truly yours, like adding a custom favicon, configuring it with your domain, and adding a navigation bar with links and breadcrumbs. Alongside those features, your new site also has analytics and basic SEO features like meta titles and descriptions. 

One thing the team stayed away from for now is adding too many customization options. It’s not a site-builder like Webflow or Framer, and at the moment, it’s not meant to be. You can theme your site to light or dark mode or respond to system settings, but that’s about as far as it goes. There’s no animation engine, custom CSS, hamburger icons, etc. Over time, the team plans to add more functionality, according to product lead Matt Piccolella.

If you want to take Notion Sites for a test drive, it's available now across the platform. Log in to your profile, swipe through the onboarding, and create a document to turn into a site.

PRODUCT HIGHLIGHT
June 25th, 2024
This new AI chatbot aims to be an AI-powered reflection of yourself

AI is rapidly transforming our personal lives with automated tools and innovations, such as AI girlfriends, therapists, travel guides, and even AI social networks.

Dot fits somewhere into this genre. It’s a new AI app built by the team at New Computer. It’s not a therapist or a girlfriend — it’s more of a friend, a companion, and a confidant who strives to learn as much as possible about you. This allows it to give advice and complete tasks beyond the typical responses of other chatbots.

When you first boot up Dot, you’ll go through an onboarding process. Your new companion will ask “getting to know you” questions, like “What do you do for work?” “What’s your favorite hobby?” or “How do you like to wind down in the evening?”

Once you answer, the AI will ask deeper questions. I mentioned that I love cooking, leading Dot to ask, “What’s your earliest memory of home-cooked food?” You can tell Dot to move on, but early on it will keep probing to get to know you.

The more questions you answer, the more like you Dot becomes. It’s not meant to replace human connection but to help you learn more about yourself. “It’s meant to be a tool for self-introspection, accountability, personal growth — but not to replace human relationships,” co-founder Samantha Whitmore told TechCrunch.

So what’s it good for? My immediate use case was as a journal. It’s good for externalizing thoughts and feelings, and since the AI gets to know you, it can sympathize better than other chatbots. Its follow-up questions genuinely lead to introspection.

If you want to try out an AI version of yourself, Dot is available on the iOS App Store.

PRODUCT HIGHLIGHT
June 24th, 2024
Anthropic’s latest model is giving GPT-4o and others a run for their money

The AI arms race is continuing at breakneck speed. Anthropic, the AI company that netted a whopping $2.75 billion in funding from Amazon earlier this year, just launched its latest model, Claude 3.5 Sonnet, out of nowhere. 

Sonnet takes up the middle position in the company’s model lineup, behind Opus, its highest-end model, and ahead of Haiku, its smallest. (Is it just me, or is keeping up with all the different names for AI models starting to get a little confusing?) Interestingly, the company claims Sonnet, the middle offering, now outperforms Opus, its largest, at least until Opus gets its own 3.5 update.

What’s the big deal? Like all new releases, the hype comes from how it performs against others, especially OpenAI’s models. According to Anthropic, Sonnet 3.5 matches and beats GPT-4o and Google’s Gemini across various tasks. When you look at the benchmarks, it does look pretty impressive. It outscored GPT-4o, Gemini, and Meta’s Llama 3 400B in seven of the nine overall benchmarks and four of the five vision benchmarks. Of course, benchmarks should be taken with a grain of salt since companies can pick and choose based on what looks good. 

Alongside Sonnet 3.5, Anthropic also released a new feature called Artifacts. With Artifacts, you can now interact more directly with the results of your Claude query. Say you generate an image and you’re not completely happy with it; you can edit it directly in the app. Or maybe you’re writing a cover letter: instead of copying and pasting it into your notes app, you can make changes directly inside Claude. 

If you want to give the new model a whirl, Sonnet 3.5 is now available to all Claude users on the web and in the company’s iOS app.

PRODUCT HIGHLIGHT
June 21st, 2024
This AI tool wants to save you hours of video editing time by automating it

Being a content creator can be hard work. Every video you watch can have hours or even days of work pumped into it, from script-writing to shooting scenes to editing to distribution. It’s a lot. 

One of those tasks is video editing, which can be notoriously difficult and time-consuming. Having to line up every clip perfectly and match audio is no small feat. What if there was a way to automate it? 

Cue content creator gasps.

Tellers is a new platform that aims to automate much of the video-editing process with the help of AI. All you have to do is provide a script or even an article, select a source for your video clips, and you will be presented with a ready-to-publish video in a few moments. It doesn’t require any prior editing experience, and if you’re not completely happy with the end result, you can dive back in, rearrange the clips, and upload new ones. 

Video generation isn’t anything particularly new when it comes to AI. Sora has been blowing minds for a few months, and I covered Luma AI’s Dream Machine only last week. Where Tellers differs is how it creates videos. Rather than generating a scene out of the blue, it pulls clips from existing sources the team has partnered with, like Pexels, and edits them to match the script you provided. 

One use case I kept thinking about when testing out Tellers is startup marketing. Video is critical to marketing these days, but what if you don’t want to spend the time or the money to create one? With something like Tellers, you could theoretically pump out videos at a faster rate with less time and cash spent doing so.

PRODUCT HIGHLIGHT
June 20th, 2024
This new social app is replacing humans with AI characters

Buckle up because we’ve officially arrived at the uncanny valley portion of AI. Until now, AI has typically been relegated to chatbots like ChatGPT or image-generation tools like Midjourney and DALL-E. It’s not like we have AI personalities running around social media or anything…

Meet Butterflies. It’s a new social app by ex-Snap engineering director Vu Tran. On the surface, you could mistake Butterflies for any number of Instagram clones. You have a grid-based profile, can follow others, and your timeline is littered with photos from “friends.” 

However, things quickly change once you start using it. Once you sign up, you’ll be asked to create an AI character or, as the app calls it, a “butterfly.” From there, your butterfly will start generating and sharing photos and interacting with other butterflies’ posts. There’s no limit to how many butterflies you can create, either.

During its private beta, Butterflies attracted tens of thousands of users and a healthy $4.8 million in funding from Coatue, SV Angel, and others. This week, it finally went live in both the iOS App Store and Google Play Store.

When I was playing around with it, I couldn’t shake the uncanny valley feeling. On the surface, it feels like a real, human-led social network, and anyone in passing would probably think you’re scrolling Instagram, but when you look a little closer, you notice things like people with seven fingers or three arms. After reading a few generated comments, I noticed the repetitive and hollow language common in AI models. 

But maybe that’s the point: As AI gets smarter and more human-like, so too will Butterflies. At the very least, it’s an interesting experiment where you can follow the progress of AI in real time in a more human setting instead of reading about the technical capabilities of different AI models. 

After all, caterpillars start off as something fairly unassuming before morphing into beautiful butterflies. Why can’t AI do the same? 

PRODUCT HIGHLIGHT
June 19th, 2024
This new app co-founded by an MCU star wants to make sharing fun again

We’ve all been there. You come across a TikTok video that is so funny that you can barely catch your breath from laughter, and you decide to send it to your friend. Surely, it will elicit the same response, but the only reply you get is a boring laughing face emoji, or worse, you’re left on read. 

It’s not you, it’s them. Don’t fret, though. A new app aims to solve this and bring the human charm back to social sharing.

Founded by serial entrepreneur and Moment founder Faheem Kajee and Hollywood star Karen Gillan, known for playing Nebula in the MCU movies, Seen is a video messaging app that lets you send private videos to your friends in either one-on-one direct messages or group chats of up to 11 people. 

Say you come across a hilarious cat video online and want to share it with your friends. Seen lets you quickly drop the video in the group chat for your friends to view. Once your friends receive the content, they must record themselves reacting to the video. This reaction gets shared only with the friends in the chat, and in turn, the sender must then record themselves. 

One of the app's core features is its integration with TikTok, meaning you can easily share videos from directly inside the app, similar to how you would copy the link. Seen also incorporates a scrollable feed of some of the most popular videos on TikTok. According to the team, a Reels and YouTube Shorts integration is on the horizon. 

Seen is currently only available for iOS, and the team is hard at work building the next iteration, which will likely include a public feed of users and even monetization features like games and filters. It’s got some big-name backers as well, like Twitter co-founder Ev Williams and Twitch co-founder Kevin Lin.

PRODUCT HIGHLIGHT
June 18th, 2024
This new feature from Warp lets users talk directly to their computer in plain English

Imagine being a developer and telling your command line exactly what you need in plain English.

That’s the idea behind Warp’s newest product, Agent Mode. Agent Mode lives in the Warp Terminal and allows developers to write in plain English and receive step-by-step guidance through development workflows.

Agent Mode is not a code generation tool; it’s more of an always-on technical assistant in your terminal. According to Melanie Crissey, Warp's Product Marketing Manager, it represents a “paradigm shift” in how developers interact with their computers.

What can it do?

Unlike an external AI assistant, Agent Mode follows along in your terminal while you work. It learns about your project, identifies roadblocks, and understands the technologies you're using to make smart suggestions about your dev environment.

For example, if you’re setting up a Rails project but are unfamiliar with the command line commands, you can ask for guidance in plain English. The AI agent will walk you through each command you need to input.

Agent Mode can also execute tasks. In the announcement video, Founder and CEO Zach Lloyd sets up a database pooling connection. After Agent Mode explains the initial tasks, Zach asks it to set up a Docker file. The assistant seeks permission to run specific commands and instantly sets up the Docker file once approved. It will ask for permission anytime you request it to execute a command. 

There’s been a lot of talk about AI replacing developers. Remember Devin, supposedly the first AI-powered software engineer? The Warp team has decided to take a different approach, choosing to empower developers instead of replacing them. CEO Zach Lloyd said, “Instead of trying to replace software engineers with AI, Agent Mode unlocks productivity, tightening the feedback loop between humans and AI.”

If you want to give it a spin, you can download or update Warp and simply type “hello”, and Agent Mode will be ready and waiting.

PRODUCT HIGHLIGHT
June 17th, 2024
This AI platform collaborates with big-name artists to make music accessible to all

Remember when The Beatles got together for one last song? It got mixed reviews, but the fact that one of the biggest bands of all time embraced AI set a precedent. AI was going to change the music industry. From writing songs to generating entire tracks, many predicted an AI-powered musical revolution.

TwoShot is one of those AI apps looking to make its mark on the music scene. Launched last week, it’s focused on making music accessible to all by letting you craft tracks with your voice, a text prompt, or just by humming into a mic. 

Say you’re working on a lo-fi track to stimulate productivity. You can instruct TwoShot to generate a “melody of flutes inspired by nature” and then pair it with a chill drum line by beatboxing into your mic. The AI will turn it into a full-fledged, professional-sounding track. 

It also comes with a library of over 200,000 samples, ranging from rock to country and everything in between, that you can grab for inspiration or have the AI remix into something new for your next banger. 

One of the most powerful features is TwoShot's plethora of different models. While building your track, you can swap your chosen AI model at any stage for a different one, including ones built by big names. Say you want your lyrics to have a female voice. You can load up the “Grimes” model with your prompt or existing sounds, and it will work it into a more Grimes-esque sound.

Alongside that, these models are “ethically trained,” according to the team, meaning they have usually obtained the artist’s permission to use their likeness or worked with the artist to create the model itself, as in the case of the Grimes model above. 

Of course, TwoShot isn’t the only platform looking to change up the music scene. Spotify recently launched AI playlists, a new tool that lets you generate playlists with a prompt, and Meta launched AudioCraft last year as a tool for making songs with AI.

PRODUCT HIGHLIGHT
June 14th, 2024
Liveblocks just launched a new update to help you build collaboration features faster

Building apps has become easier over the past few years thanks to APIs, frameworks, and SDKs. These tools make it faster to implement features that, say, twenty years ago, would have taken weeks, if not months, to build by yourself. 

One of the features that is still notoriously difficult to implement is real-time collaboration. The cursors and comments of your teammates in your Figma design, which you might take for granted, are no small feat, and in today’s remote-work world, reliable real-time collaboration is critical to any team software. 

Liveblocks is a platform built to make it easier to implement these features. It was founded by Steven Fabre and Guillaume Salles in 2021 as a live presence API showing which team members were viewing a document. Since then, it has raised $5 million in seed funding to expand its capabilities, and the team just launched the latest update.

So what’s new? Since its initial launch, the team has doubled down on its efforts to become an all-in-one collaboration suite for your products. The latest update expands beyond live presence functionality to include more complex features like real-time text editing, live commenting, and real-time notifications when your team makes edits or suggestions. 

It also ships with “Realtime APIs,” so if something is missing that your team can’t live without, you can build it with the Liveblocks service.

The handy thing about this is that every feature includes fully-styled default components. If you need to implement comments and aren’t too fussy about the specifics, you can theoretically add them to your app in moments. Components even include dark mode by default.
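To give a feel for what that looks like in practice, here’s a minimal sketch of dropping the pre-styled comment components into a React app. The package paths and component names (LiveblocksProvider, RoomProvider, useThreads, Thread, Composer) are my best understanding of Liveblocks’ React packages and may not match the current docs exactly, so treat it as illustrative rather than copy-paste ready.

```tsx
// Illustrative sketch only: package paths and component APIs are assumptions,
// not verified against Liveblocks' current documentation.
import { LiveblocksProvider, RoomProvider, useThreads } from "@liveblocks/react";
import { Thread, Composer } from "@liveblocks/react-ui";

function DocumentComments() {
  // Fetch every comment thread attached to the current room (i.e. document).
  const { threads } = useThreads();
  return (
    <section>
      {threads?.map((thread) => (
        // Pre-styled thread component: replies, avatars, and reactions included.
        <Thread key={thread.id} thread={thread} />
      ))}
      {/* Pre-styled composer for starting a new comment thread. */}
      <Composer />
    </section>
  );
}

export function App() {
  return (
    // The API key and room id below are placeholders for your own values.
    <LiveblocksProvider publicApiKey="pk_your_key_here">
      <RoomProvider id="my-document" initialPresence={{}}>
        <DocumentComments />
      </RoomProvider>
    </LiveblocksProvider>
  );
}
```

From there, the dark mode and default styling mentioned above come from the components themselves rather than anything you write by hand.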

It's in pretty good company too, with the likes of Zapier, Vercel, and Hashnode all using Liveblocks to power their real-time collaboration features.

PRODUCT HIGHLIGHT
June 13th, 2024
This new AI video model could challenge OpenAI

Looking back at AI-generated videos from 2023 is like viewing cave drawings in a museum. It's been about a year since the Internet saw an AI-generated video of Will Smith eating spaghetti. If you haven’t seen it — be warned, it’s nightmare fuel.

Since then, AI video generation has advanced significantly. OpenAI set things in motion with the announcement of Sora earlier this year. Now, there's a new player on the block.

Luma AI, a California-based startup backed by Andreessen Horowitz and Nvidia, announced Dream Machine, a generative AI video model that creates high-quality, realistic shots from basic text prompts and images. It's currently in a free beta, but it’s been flooded with users, causing frequent crashes since it debuted on X (formerly Twitter) yesterday.

What does it generate? Almost anything. Like Sora, its output depends on the details of the prompt. It can create clips in various styles, from a hyper-realistic frog in city lights to a Studio Ghibli-inspired woman looking out a train window. And yes, someone recreated that spaghetti-eating clip.

How does it compare to Sora? Dream Machine can generate up to 120 frames of video in around 120 seconds, outperforming OpenAI’s Sora, which produces up to a minute of video but takes 10 minutes to an hour, depending on who you ask. When it comes to quality, it’s pretty strong, although through my own testing I noticed it can get a little funky when it comes to movement. Take this UFO-inspired video for example. The movement of the child towards the end has a certain stiffness to it. 

Who else is making waves? Dream Machine and Sora are not the only players in the generative AI video scene. Runway’s Gen 2, a multi-modal AI system for generating video, launched last year but now feels outdated. Kling, an AI model from China, also made a splash when it launched last week with new videos that could make OpenAI sweat.

PRODUCT HIGHLIGHT
June 12th, 2024
This Mac app combines all the major AI models under one subscription

In case you were hiking in the desert with no service, Apple revealed Apple Intelligence—a suite of interconnected AI features for summarization, text generation, and integration with ChatGPT.

The key selling point is the integration into the Apple ecosystem across iPhone, Mac, iPad, and Apple Watch. But what if you don’t want to wait? I mean, Apple Intelligence won’t be available until at least this autumn unless you try the beta — which comes with its own risks.

That’s partly why Invisibility launched yesterday. Invisibility is a Mac app that lets you use GPT-4o, Claude 3 Opus, Gemini, and Llama 3 under one subscription. Unlike Apple Intelligence’s more background AI approach, Invisibility is a chatbot widget at the bottom of your Mac screen that you can open with Option + Space.

Once you boot it up, you can switch AI models like TV channels. Pick your preferred model, and it’s ready to go. You can immediately ask it anything on your screen—no screenshotting needed. It can see what you’re seeing. 

Let’s talk about the elephant in the room: why use Invisibility when Apple Intelligence is around the corner? According to tech journalist Connor Jewiss, there are two main reasons. First, Apple didn’t show coding abilities in its demo, something the Invisibility team has prioritized as a use case for its offering.

Second, Apple integrates only with ChatGPT. Invisibility connects to all the major AI models under one subscription, so you don’t have to juggle multiple subscriptions, which is arguably the biggest selling point. As AI grows and new and better models hit the market, swapping tabs and managing subscriptions can easily become a pain point for power users. 

Invisibility isn’t the only tool that lumps different models into one interface. ChattyUI, which launched last week, is a web-based platform that lets you chop and change between open-source models like Mistral, Gemma, and Llama 3. 

PRODUCT HIGHLIGHT
June 11th, 2024
Here are the highlights from Apple’s WWDC 2024 event

It’s that time of year again when Apple announces new software goodies — I’m talking about WWDC. The 2024 event started with Craig Federighi leading an Apple team as they jumped out of a plane. It was low-key giving James Bond at the Olympics vibes, but I’m here for it. 

A lot was announced in the 90-minute pre-recorded event, from the latest iPhone software to Apple finally announcing its AI plans. Here’s a quick round-up: 

Apple Intelligence: “Apple Intelligence” is Apple’s moniker for its suite of interconnected AI features. It can generate pieces of text for you, summarize emails, and write replies, and you can even generate custom emojis now. Alongside that, Siri got a new AI-powered look with more in-app functionality and ChatGPT integration. 

iOS 18: It wouldn’t be WWDC without a big iOS update. The newest iPhone software includes customization options like icon themes (sorry, designers) and the ability to place icons wherever you want. iMessage now has formatting options and new emoji reactions. It supports RCS (Rich Communication Services), a messaging protocol built for Android that lets users send more multimedia-rich texts, and you can text via satellite. The Photos app will have AI-assisted filtering and organization, catching it up to Google Photos.

macOS Sequoia: Apple continued its tradition of naming macOS releases after California landmarks, this time Sequoia National Park (I still miss the big cats). New to Mac is a much better iPhone mirroring feature that lets you use your phone on your Mac more seamlessly. It’s great for demos and developing mobile apps. Mac also gets window tiling, a native tool for organizing your app windows, something makers have been solving for years with tools like Magnet and Cinch.

Alongside those announcements, Apple also announced iPadOS 18 (which finally gets a calculator app, complete with an AI-powered Apple Pencil mode), watchOS 11, and visionOS 2, which comes with new spatial video enhancements. The Apple Vision Pro is also being released in more countries.

PRODUCT HIGHLIGHT
June 10th, 2024
Remember more information with this AI tool

The internet is kind of like a double-edged sword. On one side, you have instant access to a limitless amount of information through articles, research papers, blogs, podcasts, and videos. On the other, there’s so much information that you’ll likely forget what you consumed within a week.

That’s the exact problem Paul Richards, co-founder of Recall, was dealing with. No matter how many knowledge management tools he tried, he always spent more time organizing his notes than actually consuming the information. 

“When my frustration reached an all-time peak, I decided to take matters into my own hands and built my own tool.” Active Recall is the next step in Paul and his team’s mission to bring order to information chaos. 

It’s an AI-powered, web-based tool that aims to help you remember more of what you consume by using better categorization, contextual recollection, and even some gamification. When you stumble upon something interesting, click the Recall icon in your browser bar; from there, Recall will get to work summarizing the piece based on its key points. 

Once done, it will add it to your knowledge base and categorize it based on what it mentions. If it’s an article about GPT-5 rumors, it will likely mention OpenAI, Sam Altman, AI, and large language models. This is to make it easier for users to resurface content. Rather than relying on remembering the title or publication, you can just remember what it was about. 

Where it gets really interesting is how it ties different bits of information together as you’re actively consuming content. Say you’re reading an article about Elon Musk and you already have a piece saved about colonizing Mars, one of Musk’s and SpaceX’s main goals. Recall will remember this and resurface it in the right context. 

Alongside that, Recall also gamifies information retention with things like AI-generated information quizzes, which ideally should help you retain more knowledge about what you’ve read. If you want to try it out, the team is offering 40% off for today only. 

PRODUCT HIGHLIGHT
June 7th, 2024
This AI-powered video app wants to make it easier to go viral

Video editing software has looked more or less the same for the past decade. If you make videos for a living or hobby, you know what I’m talking about. A bin of content, a timeline, a preview screen, and space for effects and plugins.

While video editing software hasn’t changed much, video content itself has. It’s shorter and pumped out faster, a lot of it is vertical, and oftentimes there’s more than one person on camera, like on podcasts.

Detail originally launched in 2022 as a Mac app, and its latest update is pitched as a supercharged video tool for today’s content creators. It combines a camera with an AI-powered video editor that prioritizes features to help you quickly go from idea to potential viral video. 

Here’s the lowdown: 

Templates: Detail now provides ready-to-use templates for podcasters, reaction videos, and educational content, allowing creators to start professional-looking projects immediately. You can edit these by adding live effects and virtual backgrounds instead of a greenscreen.

Text-based editing: Similar to Descript’s Underlord tool, Detail lets you easily edit your video on your phone by chopping and changing the text. So, no more fiddling with inaccurate timelines.

Multi-Device Recording: Detail supports multi-camera setups, allowing for synchronized multi-angle recordings, which is ideal for creating polished YouTube vlogs or Instagram Reels. You can then combine the two videos in any layout.

Cinematic Effects: What if you want your video to look like Oppenheimer? AI can quickly generate filters based on your favorite movies and shows and apply them to your video.

Captions and clips: Once you’re happy with your video, you can quickly add AI-powered captions to ensure your content is clear across whatever platform you post on. I mean, we all watch TikTok without sound from time to time. It will also cut up your clip into bite-sized chunks, which is good for viral content across different platforms.

PRODUCT HIGHLIGHT
June 6th, 2024
Second handles codebase maintenance so you can focus on shipping

Want to scare a developer? Dust off some legacy code without comments and ask them to update and maintain it. I joke; don’t do that. Any good developer will tell you that you can’t just ship code and forget it. Regular maintenance is needed to ensure your codebase works as it’s meant to, especially when it comes to things like upgrading syntax and migrating frameworks.

Second is an AI companion designed to automate the process of maintaining codebases so engineers can focus on innovating. It was founded by Eric Rowell, co-founder and former CTO of Uiflow, a no-code tool that lets developers quickly create custom web applications. Eric has been around the block; this is his second time putting a company through YC.

So, how does it work? It’s pretty straightforward. As mentioned, Second automates codebase maintenance, including codebase migrations and upgrades. All you have to do is connect it to your GitHub repo, select whatever maintenance module you need, such as the Angular to React option, review the AI agent's plan of action, and hit run. 

From there, Second’s AI agents will get to work generating what should be a high-quality pull request before sending it off to your team for review. 

According to Eric, developers are already using Second for all sorts of tedious jobs that would otherwise stop them from focusing on building and shipping, like “AngularJS to React migrations, JavaScript to TypeScript migrations, feature flag cleanup, language upgrades, test generation, and more.”
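For a sense of how mechanical that work is, here’s the flavor of change a JavaScript to TypeScript migration involves. This is a hand-written illustration, not output from Second:

```ts
// Hand-written illustration of a typical JavaScript-to-TypeScript migration
// step. This is not output from Second, just the flavor of work it automates.

// Before (cart.js):
// function totalPrice(items, taxRate) {
//   return items.reduce((sum, item) => sum + item.price, 0) * (1 + taxRate);
// }

// After (cart.ts): identical logic, with explicit types layered on top.
interface LineItem {
  name: string;
  price: number;
}

export function totalPrice(items: LineItem[], taxRate: number): number {
  return items.reduce((sum, item) => sum + item.price, 0) * (1 + taxRate);
}
```

Multiply that by a few hundred files and it’s easy to see why handing it to an AI agent, with your team reviewing the resulting pull request, is appealing.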

So, instead of scaring a developer, why not put a smile on their face and introduce them to Second?

Top Launches:
Second V2
Fliki
Steer
PRODUCT HIGHLIGHT
June 5th, 2024
Descript’s newest tool uses AI to help creatives automate tedious tasks

Since AI burst onto the scene, a lot of talk has focused on its ramifications, especially in the creative world, where it could pose a threat to artists, writers, photographers, and more. But there is another side to it, one where, rather than replacing these creatives, AI helps them produce great content faster and cheaper.

Descript, long celebrated for making podcast creation easier with its intelligent speech recognition and editing tools, has just launched Underlord. It’s an AI tool designed to help you with some tedious video editing tasks while leaving you in control. 

Here’s the lowdown:

Audio: Underlord cleans up audio, ensuring high-quality sound for your videos. It removes background noise and optimizes audio levels to ensure you always sound your best.

Script Writing and Brainstorming: One of the toughest challenges in content creation is facing the blank page. Underlord helps overcome this by brainstorming ideas and generating script outlines for video projects. It provides creative suggestions, structures your thoughts, and offers a foundation to build upon.

Composition: Underlord automatically addresses composition issues. It ensures eye contact, centers the speaker, and generates images to display alongside the video. Additionally, it adds multicam angles where needed to enhance the visual experience.

Repurposing: This feature automatically identifies the segments of your video with the highest viral potential and clips them out. It can add captions and titles and resize videos for different social media platforms, ensuring your content is optimized for each platform’s unique requirements.

If you want to test Underlord with your next video, you can do so now. The Descript team is also hosting a live demo tomorrow if you want to see how it can work for you. 

PRODUCT HIGHLIGHT
June 4th, 2024
The founder of Oculus is back with a new console that’s dripping in nostalgia

Since founding Oculus VR and eventually selling it to Meta, Palmer Luckey has continued experimenting with unusual tech, like this VR headset that could kill you if you lose a game. Now, he's focusing on a much safer and fun project: the ModRetro Chromatic.

The ModRetro Chromatic is a handheld console designed to run any Nintendo Game Boy cartridge. While it retains the nostalgic look of Nintendo’s handheld gaming era, the Chromatic features several enhancements aimed at bringing the childhood console into the modern age.

It’s housed in a magnesium alloy case with sapphire crystal cover glass, and its display keeps the original Game Boy’s size, resolution, pixel structure, and color balance. With 1,000 nits of brightness and an outdoor-friendly LCD screen, you can game to your heart’s content in most lighting conditions. 

The ModRetro Chromatic’s FPGA technology allows it to play any official Game Boy and Game Boy Color cartridge, just like the original devices. Additionally, each console includes an officially licensed version of Tetris, featuring updated versions of the iconic theme song and Link Cable multiplayer support. It also comes in a number of fun color options and has a USB-C port to charge the AA batteries that power it. 

This isn’t the first reimagining of classic handheld devices. The Analogue Pocket has been a fan favorite for gamers everywhere since it dropped. Where the two differ, according to Luckey, is in the fine details: “The color temps are actually right, the clock rate isn’t slightly off, the pixel structure isn’t totally wrong in a way that ruins subpixel aware sprites, etc,” he told The Verge.

PRODUCT HIGHLIGHT
June 3rd, 2024
Stanford PhDs release their faster, ultra-realistic generative voice model

Voice technology has lagged for the last decade. We started envisioning a futuristic universe when we got voice assistants like Siri, but we haven’t gotten much further than speakers that can turn on a light. One company, Cartesia, wants to take it that bit further with AI. 

Cartesia Sonic is an AI voice model built on a state space model (SSM) architecture that the founders invented while doing their PhDs at the Stanford AI Lab. The team (which also boasts backgrounds from Google Brain and Snorkel AI) has spent years building the theory behind SSMs, which are now being used in academia and industry for vision, robotics, and biology. 

The SSM architecture enables Sonic to quickly process vast amounts of data, providing seamless and natural voice interactions. With Sonic, you can generate high-quality, lifelike speech with minimal latency. The AI boasts a response time of just 135ms, making it ideal for applications requiring real-time feedback, such as customer support, entertainment, and content creation.

One of the standout features is Sonic’s ability to customize voices on the fly. Users can adjust parameters like speed and emotion and instantly clone voices for different needs.
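To picture how an app might plug into a low-latency voice API like this, here’s a rough, hypothetical sketch. The endpoint URL, request fields (voice, speed, emotion), and response handling are invented for illustration and are not Cartesia’s documented API:

```ts
// Hypothetical sketch only: the URL, request fields, and response handling
// below are illustrative assumptions, not Cartesia's actual API.
async function generateSpeech(text: string): Promise<ArrayBuffer> {
  const start = performance.now();

  const response = await fetch("https://api.example.com/v1/tts", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.TTS_API_KEY}`,
    },
    body: JSON.stringify({
      text,
      voice: "narrator-1", // hypothetical voice id
      speed: 1.0,          // illustrative speed control
      emotion: "calm",     // illustrative emotion control
    }),
  });

  // Raw audio bytes; a real-time app would stream them instead of buffering.
  const audio = await response.arrayBuffer();
  console.log(`Round trip took ${(performance.now() - start).toFixed(0)} ms`);
  return audio;
}
```

The appeal of a roughly 135ms model latency is that the generation step stops being the bottleneck in a conversational loop; the rest of the round trip is mostly network.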

Sonic also excels in speech generation, achieving a 2x lower word error rate and a 1-point higher quality score than traditional models. This ensures that Sonic generates speech effectively and understands it accurately.

Cartesia's larger agenda is to expand well beyond voice — aiming for its models to instantly understand and generate content in any modality across any device. 

If you want to try out Sonic now, you can dive into the web playground, or if you’re a dev, you can reach out to the team to partner on building a real-time, conversational AI platform. 

Top Launches:
Ginix
Eve
Artizyou