Google DeepMind's Lyria 3 Powers New AI Music Tool in Google Labs
By admin | Feb 24, 2026 | 3 min read
Google announced on Tuesday that its generative AI music platform, ProducerAI, will join Google Labs. The tool, which has backing from The Chainsmokers, enables users to generate music by typing natural language prompts, such as “make a lofi beat.” It operates using Google DeepMind’s Lyria 3 model, a system capable of transforming text and even image inputs into audio.
Last week, Google revealed plans to integrate Lyria 3’s capabilities into its flagship Gemini app. ProducerAI, however, is designed to foster a more interactive experience. Elias Roman, Senior Director of Product Management at Google Labs, described it as enabling users to communicate with the AI like a “collaboration partner.”
“ProducerAI has allowed me to create in new ways,” Roman wrote in a blog post. “I’ve experimented with new genre blends, expressed how I feel with personalized birthday songs for my loved ones, and made custom workout soundtracks for myself and friends.”
The company also highlighted that three-time Grammy-winning rapper Wyclef Jean utilized the Lyria 3 model and Google’s Music AI Sandbox for his recent track “Back From Abu Dhabi.”
In a company video, Jeff Chang, Director of Product Management at Google DeepMind, emphasized the tool’s collaborative nature. “This is not just a machine where you’re clicking a button a hundred times, and then you’re done. It’s a careful kind of curation where you’re going through and saying, ‘Oh, I think that’s something we can use,’” he said.
Jean shared an example of wanting to hear how a flute would sound on an existing recording and using Google’s tools to instantly add it. “What I want everybody to understand […] is you’re in the era where the human has to be the most creative,” Jean stated in the video. “There’s one thing that you have over the AI: a soul. And there’s one thing that AI has over you: the infinite information.”
**AI in the Music Industry**
The adoption of AI in music has sparked significant debate. Many musicians strongly oppose these tools, largely because generative AI models are typically trained on copyrighted data without artists' consent. In 2024, hundreds of artists, including Billie Eilish, Katy Perry, and Jon Bon Jovi, signed an open letter urging tech companies not to undermine human creativity with AI music generation.
Legal challenges are also mounting. A group of music publishers recently sued the AI company Anthropic for $3 billion, alleging it illegally downloaded copies of more than 20,000 copyrighted musical works, including sheet music and lyrics. This follows a prior court order for Anthropic to offer a $1.5 billion settlement to authors whose books were pirated for AI training.
Conversely, some artists are embracing AI for practical enhancements rather than as a primary creative tool. Paul McCartney used AI-powered noise reduction technology—similar to systems that filter background noise on video calls—to restore a decades-old, low-quality John Lennon demo. The resulting “new” Beatles song, “Now and Then,” won a Grammy in 2025.
Meanwhile, AI music generators like Suno are producing synthetic tracks convincing enough to chart on Spotify and Billboard. Telisha Jones, a 31-year-old from Mississippi, used Suno to transform her poetry into the viral R&B song “How Was I Supposed To Know” and secured a record deal with Hallwood Media reportedly worth $3 million.
The legal landscape surrounding the use of copyrighted material for AI training remains ambiguous. Last year, federal judge William Alsup ruled that training AI models on copyrighted works can qualify as fair use, but that pirating the material to obtain it is not.