At the Augmented World Expo on Tuesday, Snap teased an early version of a real-time, on-device image diffusion model that promises to generate vivid AR experiences. The more immediate news, however, was the company's unveiling of generative AI tools for AR creators.
Onstage, Snap co-founder and CTO Bobby Murphy said the model is small enough to run on a smartphone and fast enough to re-render frames in real time, guided by a text prompt.
Murphy noted that while generative AI image diffusion models are exciting, they need to be an order of magnitude faster to be impactful for AR, which is why his teams have been working to accelerate machine learning models.
Snapchat users will soon see Lenses powered by this generative model, and Snap plans to bring it to creators by the end of the year.
"This, and future real-time on-device generative ML models speaks to exciting new directions for augmented reality-and is giving us space to reconsider how we imagine rendering and creating AR experiences altogether," said Murphy.
The company also announced that Lens Studio 5.0 is launching today for developers, with new generative AI tools that will help them create AR effects far faster than is currently possible, saving weeks or even months of work.
AR creators will be able to build selfie Lenses with highly realistic ML face effects, and generate custom stylization effects that apply over a user's face and entire surroundings in real time. They will also be able to produce 3D assets to include in their Lenses in just minutes.
AR creators can also use text or image prompts with the company's Face Mesh technology to create characters like aliens or wizards, and can generate face masks, textures, and materials within minutes.
The new version of Lens Studio also includes an AI assistant that can answer questions AR creators may have.