Adobe is also developing generative video technology.

Adobe says it is developing an AI model that creates video. It won't say when this model is coming, exactly — or much of anything about it, except that it exists.

Responding to OpenAI's Sora, Google's Imagen 2 and models from the growing number of startups in the nascent generative AI video space, Adobe's model, part of the company's expanding Firefly family of generative AI products, will make its way into Premiere Pro, Adobe's flagship video editing suite, sometime later this year, the company says.

Like many generative AI video tools out today, Adobe's model creates footage from scratch (either from a prompt or reference images), and it will power three new features in Premiere Pro: object addition, object removal and generative extend.

They're pretty self-explanatory.

Object addition lets users select a portion of a video clip (the top third, say, or the lower left corner) and enter a prompt to insert objects into that area. During TechCrunch's briefing on the tool, an Adobe representative showed a still of a real-world briefcase filled with diamonds generated by Adobe's model.
Object removal strips objects out of clips, such as boom mics or coffee cups looming in the background of a shot.

As for generative extend, it adds a few frames to the beginning or end of a clip (the company wouldn't say how many). It's not intended to fill in entire scenes, but rather to add buffer frames to sync with a soundtrack or hold a shot for an extra beat, adding emotional heft.
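
For a sense of the syncing math involved, here's a back-of-the-envelope sketch. Adobe hasn't described how generative extend picks frame counts, so the helper, its names and the fixed frame rate are all my own assumptions:

```python
import math

def frames_to_extend(clip_seconds: float, audio_seconds: float, fps: float = 24.0) -> int:
    """Estimate how many extra frames a clip needs to cover a soundtrack.

    Hypothetical helper for illustration only; Adobe hasn't said how
    generative extend decides how many frames to add.
    """
    shortfall = max(0.0, audio_seconds - clip_seconds)
    return math.ceil(shortfall * fps)

# A 4.8-second shot against a 5.0-second music cue at 24 fps
# needs roughly 5 generated frames to hold until the cue ends.
print(frames_to_extend(4.8, 5.0))  # -> 5
```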

To fight the fear of deepfakes that inevitably gathers around generative AI tools like these, Adobe says it's bringing Content Credentials, metadata that identifies AI-generated media, to Premiere. Content Credentials is a media provenance standard that Adobe backs through its Content Authenticity Initiative; it's already part of Photoshop and Adobe's image-generating Firefly models. In Premiere, Content Credentials will identify not only which content was AI-generated but also which AI model was used to generate it.
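
To give a rough idea of what checking that metadata could look like, here's a hedged sketch in Python. It assumes you've already extracted a C2PA manifest as JSON (for example, with the open-source c2patool utility) and looks for the IPTC "trainedAlgorithmicMedia" digital source type the standard uses to flag AI-generated media; the manifest shape below is a simplification of mine, not Adobe's actual output:

```python
import json

# Marker the C2PA standard uses for AI-generated media (IPTC digital source type).
AI_SOURCE_TYPE = "trainedAlgorithmicMedia"

def summarize_credentials(manifest_json: str) -> dict:
    """Scan a (simplified) C2PA manifest for AI-generation markers.

    The manifest layout here is an illustrative simplification; real
    manifests are richer and should be parsed with a proper C2PA SDK.
    """
    manifest = json.loads(manifest_json)
    actions = manifest.get("assertions", {}).get("c2pa.actions", [])
    ai_generated = any(AI_SOURCE_TYPE in a.get("digitalSourceType", "") for a in actions)
    return {
        "claim_generator": manifest.get("claim_generator", "unknown"),
        "ai_generated": ai_generated,
    }

# Hypothetical manifest, for illustration only.
sample = json.dumps({
    "claim_generator": "Adobe Firefly (hypothetical)",
    "assertions": {"c2pa.actions": [
        {"action": "c2pa.created",
         "digitalSourceType": "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"}
    ]},
})
print(summarize_credentials(sample))
# -> {'claim_generator': 'Adobe Firefly (hypothetical)', 'ai_generated': True}
```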

I asked Adobe what data — images, videos and so on — were used to train the model. The company wouldn't say, nor would it say how (or whether) it's compensating contributors to the data set.

Bloomberg, citing sources familiar with the matter, reported that Adobe is paying photographers and artists on Adobe Stock up to $120 to submit short video clips to train its video generation model. Pay reportedly ranges from around $2.62 per minute of video to around $7.25 per minute depending on the submission, with higher-quality footage commanding correspondingly higher rates.
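
Taken at face value, those reported figures imply payouts like the ones in the sketch below. The rates come from Bloomberg's reporting; the helper itself, and the assumption that the $120 cap applies to a single submission, are mine:

```python
# Per-minute rates from Bloomberg's reporting; illustrative only.
LOW_RATE = 2.62   # dollars per minute of submitted footage
HIGH_RATE = 7.25
CAP = 120.00      # reported maximum payout

def estimated_payout(minutes: float, rate: float) -> float:
    """Rough payout under the reported scheme, capped at the reported max."""
    return min(minutes * rate, CAP)

# ~16.5 minutes of footage at the top rate already hits the reported $120 cap.
print(estimated_payout(20, HIGH_RATE))  # -> 120.0
print(estimated_payout(20, LOW_RATE))   # -> 52.4
```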

That would be a departure from Adobe's current arrangement with Adobe Stock artists and photographers whose work it is using to train its image-generation models. The company pays those contributors an annual bonus, not a one-time fee, based on the volume of content they have in Stock and how it is being used — albeit a bonus that's subject to an opaque formula and not guaranteed from year to year.

Bloomberg's report, assuming it's accurate, describes an approach that contrasts with that of generative AI video rivals such as OpenAI, which is said to scrape publicly available web data, including videos from YouTube, to train its models. YouTube CEO Neal Mohan recently said that using videos from the service to train OpenAI's text-to-video generator would violate the platform's terms of service, highlighting the legal shakiness of the fair use argument that OpenAI and others rely on.

Companies including OpenAI are being sued over claims that they're violating IP law by training their AI on copyrighted content without crediting or paying the owners. Adobe appears determined to avoid that fate, much like its sometime generative AI rivals Shutterstock and Getty Images (which also have deals in place to license model training data), and, with its IP indemnity policy, to position itself as a verifiably "safe" option for enterprise customers.

On pricing, Adobe wouldn't say how much access to Premiere's video generation features will cost customers; presumably, pricing is still being hashed out. But the company did say that the payment scheme will follow the generative credits system it established with its first Firefly models.

For paid Creative Cloud subscribers, generative credits refresh each month, with allocations ranging from 25 to 1,000 per month depending on the subscription tier. As a rough rule of thumb, more complex workloads, such as higher-resolution image generations or multiple image generations, require more credits.
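
Adobe hasn't said what a video generation will cost in credits, but the budgeting math is simple enough to sketch. The monthly allocations below come from Adobe's published tiers; the per-job credit costs are placeholders I invented to show the shape of the calculation:

```python
# Monthly generative-credit allocations Adobe has described for Creative Cloud tiers.
MONTHLY_CREDITS = {"entry": 25, "top": 1000}

# Hypothetical per-job credit costs -- Adobe hasn't published video pricing.
ASSUMED_COST = {"image": 1, "hi_res_image": 2, "video_clip": 20}

def jobs_per_month(tier: str, job: str) -> int:
    """How many generations a month's credits cover under the assumed costs."""
    return MONTHLY_CREDITS[tier] // ASSUMED_COST[job]

print(jobs_per_month("entry", "video_clip"))  # -> 1 under these assumptions
print(jobs_per_month("top", "video_clip"))    # -> 50
```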

The bigger question on my mind: will Adobe's AI-powered video features be worth whatever they end up costing?

So far, the Firefly image generation models have been widely panned as underwhelming and riddled with bugs compared with Midjourney, OpenAI's DALL-E 3 and other competing tools. The lack of any stated time frame for the video model's release doesn't instill much confidence that it will avoid the same fate. Nor does the fact that Adobe declined to give me live demos of object addition, object removal and generative extend, offering a pre-recorded sizzle reel instead.

Perhaps to hedge its bets, Adobe says it's also in talks with third-party vendors about integrating their video generation models into Premiere Pro to power tools like generative extend.

One of those vendors is OpenAI.

Adobe says it's working with OpenAI to explore bringing Sora into the Premiere Pro workflow. An OpenAI partnership makes sense: the AI company has recently made public overtures toward Hollywood, and OpenAI CTO Mira Murati is among those attending this year's Cannes Film Festival. Other early partners include Pika, a startup building AI for generating and editing video, and Runway, one of the first vendors to market with a generative video model.

"I believe that the next steps would be open to a collaborative approach," an Adobe rep explained.

To be clear, these integrations are at this point more of a thought experiment than a work in progress. Adobe stressed to me repeatedly that they're in "early preview" and "research," or, put simply, not something customers should expect to play with anytime soon.

And that, in my opinion, captures the overall tone of Adobe's generative video presser.

With these announcements, Adobe is clearly signaling that it's thinking about generative video, if only at the very, very early stages. It would be crazy not to: getting caught flat-footed in the generative AI race would mean losing out on a valuable new revenue stream, assuming the economics eventually break in Adobe's favor. (AI models are expensive to train, run and serve, after all.)

But what Adobe is actually showing here (concepts, essentially) isn't very exciting, to say the least. With Sora in the wild and surely more models to follow down the pipeline, the company has much left to prove.
