The surprising success of the AI-generated band Velvet Sundown, which accumulated over 1 million streams on Spotify before its true nature was revealed, has thrust the issue of transparency in AI music into the spotlight. The incident marks a pivotal moment, forcing the music industry to confront the implications of artificial intelligence for artistry, intellectual property, and consumer perception.
Velvet Sundown’s journey from viral sensation to controversy began with two albums presented as human creations. The subsequent admission that the band’s music, images, and backstory were all AI-generated, specifically using the Suno platform, has sparked a vigorous debate. The band itself, after some equivocation, ultimately confirmed its “not quite human, not quite machine” identity.
Industry heavyweights are now vociferously calling for legal frameworks to ensure clear labeling of AI-generated content on streaming platforms. Roberto Neri of the Ivors Academy points to “serious concerns around transparency, authorship and consent”, while Sophie Jones of the BPI advocates new transparency obligations for AI companies and explicit labeling requirements. They contend that the current lack of regulation creates an uneven playing field and leaves artists open to exploitation.
The debate extends to how AI models are trained, with concerns that artists’ work is being used without authorization or payment. As seen in the 2023 case involving a track with AI-generated vocals imitating The Weeknd and Drake, the potential for copyright infringement is significant. While some platforms, such as Deezer, proactively tag AI music, the broader industry, including Spotify, faces increasing pressure to adopt comprehensive policies that protect human creativity and give listeners essential information.