During its 2024 Partner Summit today, alongside announcements of new AR glasses, an updated user interface, and other features, Snapchat revealed that users will soon be able to generate short video clips within the app using text prompts.
Snapchat is on the verge of launching a new feature that will generate short video clips based on any text input you provide.
For instance, you could type in “rubber duck floating,” and the system would create a video clip reflecting that prompt. There will also be a “Style” option that allows you to refine and customize your video according to your preferences.
Snapchat mentions that the system will eventually have the capability to animate images as well, significantly enhancing its current AI offerings.
In fact, this goes beyond what's currently available in either Meta's or TikTok's apps. Both Meta and ByteDance have working text-to-video models of their own, but neither has integrated them into its respective app yet.
However, Snap's feature isn't fully launched either. The AI video generator will be available to a small group of creators in beta starting this week, but there’s still more development needed before it’s rolled out to a wider audience.
In some respects, Snap is getting ahead of the competition, but it's worth noting that either Meta or TikTok could quickly implement their own versions to catch up.
Videos created with this tool will feature a Snap AI watermark (as seen with the Snapchat+ icon in the upper right corner of the examples shown in the presentation). Additionally, Snap is actively working to prevent the tool from facilitating potentially harmful uses of generative AI.
Snapchat also introduced various other AI tools to support creators, including a GenAI suite for Lens Studio that will enable text-to-AR object creation, streamlining the Lens-building process.
It's also adding animation tools that follow the same logic, allowing users to animate Bitmoji within their AR experiences. All of these options use AI to speed up and simplify Snap's various creative workflows.
However, AI-generated video still feels somewhat unconventional and may not align well with Snapchat’s core focus of sharing genuine, real-life experiences with friends.
The question remains: Do users truly want to create hyper-realistic AI videos to share within the app? Will this feature enhance the overall Snap experience, or could it detract from the platform’s emphasis on authenticity?
I understand why social platforms are pursuing this direction, aiming to capitalize on the AI trend to boost engagement and validate their investments in AI technologies. However, I question whether social apps, which thrive on genuine human experiences, truly benefit from AI-generated content. Such content isn’t real, hasn’t actually occurred, and doesn’t reflect anyone's lived experiences.
Perhaps I’m missing the broader perspective, and there’s no denying that the technological advancements behind these tools are impressive. Still, I struggle to see it becoming a significant feature for Snapchat users. It may serve as a novelty, but as a lasting, engaging function? Probably not.
Regardless, Snap is clearly eager to align itself with the AI hype, aiming to keep pace with competitors. If it has the ability to implement these features, then why not?
Although the AI video feature isn't quite ready for a full launch, it appears to be on the horizon.