DeepMind and YouTube Unveil Lyria, a Generative AI Model for Music, Along with Dream Track to Create AI-Generated Tunes.

Back in January, Google quietly published research it had been doing on AI-based music creation software that builds tunes from text prompts, and it sent waves through the tech world. Today, Google DeepMind took that work several steps further: it unveiled a new music generation model called Lyria, built in partnership with YouTube, along with two toolsets it is presenting as "experiments" developed on Lyria. Dream Track lets creators compose music for YouTube Shorts, while Music AI is a set of tools it says are meant to support the creative process, for example building out a tune from a clip a creator might hum. Alongside all this, DeepMind said it is adapting SynthID, its method for watermarking AI-generated images, to work with AI-generated music as well.

It's almost fitting that these new tools are emerging now, when AI is as contentious within the creative arts as it has ever been. The technology was one of the biggest points at the heart of the Screen Actors Guild strike, which finally ended this month. And in music, the Ghostwriter track that used AI to mimic Drake and The Weeknd raised the same underlying question: whether AI-driven creation is going to become more and more the norm in the future.

That makes today's announcements worth watching: a top priority for DeepMind and YouTube is building technology that keeps AI music credible, both as a complement to the work of today's creators and, in the most basic aesthetic sense, as something that actually sounds like music.

As Google's past efforts have shown, the longer AI-generated music plays, the more distorted and surreal it tends to sound, drifting further from the intended result. As DeepMind explained today, that is partly due to the complexity of the information going into music models, which spans beats, notes, harmonies and more.

"When generating long audio sequences, it's difficult for AI models to keep the music coherent over phrases, verses, or even longer passages," the company said today. "Because music often includes several voices and instruments playing together, it is much more difficult to create than speech."

It is not surprising, then, that some of the first applications of the model are showing up in shorter pieces.

Dream Track is launching in beta with a limited group of creators, who will be able to compose 30-second AI-generated soundtracks in the voice and musical style of artists such as Alec Benjamin, Charlie Puth, Charli XCX, Demi Lovato, John Legend, Sia, T-Pain, Troye Sivan, and Papoose.

A creator selects one of those artists and enters a prompt for the track. Dream Track then generates a 30-second clip, combining lyrics, a backing track, and an AI-generated voice in the style of the chosen artist (Charlie Puth, for instance), intended for use in Shorts.

The artists' participation also gives YouTube and DeepMind something to point to: by engaging with the models during testing, they feed back knowledge and insight that helps improve them.

As YouTube's head of music, Lyor Cohen, and its VP of emerging experiences and community projects, Toni Reed, note, the Music AI tools are coming out of the company's Music AI Incubator, a group of artists, songwriters, and producers who test the projects and give feedback on them.

"They were also extremely inquisitive about AI tools that might push the frontiers of what they thought was possible," they remark. "They also desired tools that would support their creative process."

Dream Track is getting a small launch today, but the Music AI tools are only coming this fall, they said. They tease three areas the tools will cover: creating music for a specified instrument, or generating a full set of instrumentation from a hummed tune; turning chords played on a simple MIDI keyboard into a full choir or other ensemble; and building backing and instrumental tracks for a vocal line you might already have. Or, indeed, a combination of all three, starting from nothing more than a hum.

Google (and Ghostwriter) certainly aren't alone in this new field of music creation, though. Others are deploying tools as well: Meta open sourced an AI music generator in June, Stability AI launched one in September, and startups like Riffusion are raising money for their efforts in the space. The music industry is bracing itself, too.
