Adobe encourages you to "embrace the tech" with the new video generator from Firefly.

Adobe launched video generation capabilities for its Firefly AI platform ahead of Monday's Adobe MAX event.
Users can try its new Generative Extend feature, an AI-powered video capability that premieres today, or test Firefly's video generator for the first time on Adobe's website.

On the Firefly website, users can experiment with a text-to-video model and an image-to-video model, each generating up to five seconds of AI-generated video. The web beta is free to use, though it likely comes with rate limits.

Adobe says it taught Firefly to produce both animated content and photorealistic media, depending on the requirements of a prompt. Firefly should, in theory, also be able to produce videos containing legible text, something AI image generators have classically struggled with. In the web app, Firefly's video settings include a toggle for camera pans, controls for how strongly the camera should move or how much its angle should change, and shot size.

Firefly's Generative Extend feature will be available only within the Premiere Pro beta app. It can extend a video clip by up to two seconds, generating an extra beat in the scene while continuing the camera motion and the subject's movement. Background audio will extend as well, the public's first taste of an AI audio model Adobe has been quietly working on. The audio extender does not, however, try to recreate voices or music, a limit that keeps Adobe clear of copyright lawsuits from record labels.

In demos Adobe provided to TechCrunch ahead of launch, Firefly's Generative Extend feature produced more impressive videos than its text-to-video model, and seemed more practical. The text-to-video and image-to-video models are not quite on the same level as Adobe's competitors in AI video, such as Runway's Gen-3 Alpha or OpenAI's Sora (though, admittedly, the latter has yet to ship). Adobe says it focused more on AI editing features than on generating AI videos, perhaps to appease its audience.

Adobe's AI features have to walk a very fine line with the company's creative audience. On one hand, Adobe is trying to lead a crowded field of AI startups and tech companies demoing impressive AI models. On the other, many creatives aren't happy that AI features may soon replace work they've done with mouse, keyboard, and stylus for decades. That's why Adobe's first Firefly video feature, Generative Extend, uses AI to solve an existing problem for video editors (a clip that isn't quite long enough) rather than generating entirely new video from scratch.

"Our audience is probably the most pixel-perfect on Earth," says Alexandru Costin, Adobe's VP of generative AI. "They want AI to help them extend the assets they have, create variations of them or edit them versus generating new assets. So for us it's very important to do first generative editing then generative creation."

Production-level models that simplify editing: that's the formula Adobe discovered early on with its image model in Photoshop. Company executives say Photoshop's Generative Fill is one of its most-used new features of the last decade, in part because it complements and accelerates existing workflows. The company hopes to repeat that success with video.

Adobe is going out of its way to seem considerate of creatives: according to reports, the company will pay photographers and artists $3 for every minute of video they upload to help train its Firefly AI model. That may win the goodwill of some creatives; however, many still fear they are going to be replaced by AI tools. Adobe on Monday also announced AI tools for advertisers to automatically generate content.

According to Costin, these concerned creatives "will not see less work; they will see more demand for the work that they do. It's infinite demand," he declares, "if you think about the needs of companies wanting to create individualized and hyper-personalized content for any user interacting with them."

Costin suggests creatives reflect on how other major technological shifts have benefited those who embraced them. He points to digital publishing and the advent of digital photography as two examples that may mirror the impact of AI-enabled tools. "The transition when these technologies were emerging was frightening," he noted. "So, if creatives just reject AI, they are going to have a hard time."

"Take advantage of generative capabilities to uplevel, upskill, and become a creative professional that can create 100 times more content using these tools," said Costin. "The need of content is there, now you can do it without sacrificing your life. Embrace the tech. This is the new digital literacy."

Firefly will also automatically embed "AI-generated" watermarks in the metadata of videos created this way. Meta already applies such labels on Instagram and Facebook, flagging content as AI-generated. The thinking is that platforms and individuals can use AI-identification tools like these to determine what is and isn't authentic, so long as the content has the right metadata watermarks attached. Adobe's videos, however, will not natively carry any visible, human-readable label identifying them as AI-generated.

Notably, the company designed Firefly to produce "commercially safe" media. Adobe claims it didn't train Firefly on images and videos depicting narcotics, nudity, violence, politicians, or copyrighted material. That should, at least in theory, mean Firefly's video generator won't produce "unsafe" videos. Now that the internet has free access to Firefly's video model, we'll see whether that is really the case.
