Like many AI companies, music generation startups Udio and Suno appear to have relied on unauthorized scrapes of copyrighted works to train their models. That much is apparent from their own and their investors’ admissions, as well as from new lawsuits filed against them by music companies. If these suits go before a jury, the trial could be both a damaging exposé and a highly useful precedent for similarly sticky-fingered AI companies facing certain legal peril.
The lawsuits, filed by the Recording Industry Association of America, put us all in the uncomfortable position of rooting for the RIAA, which for decades has been the bogeyman of digital media. I myself have received nastygrams from them! The case is simply that clear.
The gist of the two lawsuits, which are extremely similar in content, is that Suno and Udio (strictly speaking, Uncharted Labs doing business as Udio) indiscriminately pillaged more or less the entire history of recorded music to form datasets, which they then used to train a music-generating AI.
And here let us quickly note that these AIs don’t “generate” so much as match the user’s prompt to patterns from their training data and then attempt to complete that pattern. In a way, all these models do is perform covers or mashups of the songs they ingested.
That Suno and Udio did ingest said copyrighted data seems, for all intents and purposes (including legal ones), very likely. The companies’ leadership and investors have been unwisely loose-lipped about the copyright challenges of the space.
They have admitted that the only way to create a good music generation model is to ingest a large amount of high-quality music; it is, quite simply, a necessary step for creating machine learning models of this type.
They have also said that they did so without the permission of the music labels. Investor Antonio Rodriguez of Matrix Partners told Rolling Stone just a few months ago:
“Honestly, if we had deals with labels when this company got started, I probably wouldn’t have invested in it. I think that they needed to make this product without the constraints.”
The companies have told the RIAA’s lawyers that they believe the media they ingested falls under fair-use doctrine, which fundamentally only comes into play in the unauthorized use of a work. Now, fair use is admittedly a complex and hazy concept, in both idea and execution, but the companies’ use does appear to stray somewhat outside the intended safe harbor of, say, a seventh-grader using a Pearl Jam song in the background of their video on global warming.
To be blunt, it looks like these companies’ goose is cooked. They might have hoped that they could take a page from OpenAI’s playbook, using evasive language and misdirection to stall their less deep-pocketed critics, like authors and journalists. (If, by the time the AI companies’ skulduggery is revealed, they’re the only option for distribution, it no longer matters.)
But it’s harder to pull off when there’s a smoking gun in your hand. And unfortunately for Udio and Suno, the RIAA says in its lawsuit that it has a few thousand smoking guns and that songs it owns are clearly being regurgitated by the music models. Its claim: that whether Jackson 5 or Maroon 5, the “generated” songs are lightly garbled versions of the originals — something that would be impossible if the original were not included in the training data.
The nature of LLMs (specifically, their tendency to hallucinate and lose the plot the more they write) precludes regurgitation of, for example, entire books. This has likely mooted a lawsuit by authors against OpenAI, since the latter can plausibly claim the snippets its model does quote were grabbed from reviews, first pages available online and so on. (The latest goalpost move is that they did use copyrighted works early on but have since stopped, which is funny because it’s like saying you juiced the orange once but have since stopped.)
What you can’t do is plausibly claim that your music generator only heard a few bars of “Great Balls of Fire” and somehow managed to spit out the rest word for word and chord for chord. Any judge or jury would laugh in your face, and with luck a court artist will have their chance at illustrating that.
This is not only intuitively obvious but legally consequential as well, since the recreation of entire works (garbled, but quite obviously based on the originals) opens up a new avenue for relief. If the RIAA can convince the judge that Udio and Suno are doing real and major harm to the business of the copyright holders and artists, it can ask the court to shut down the AI companies’ whole operation at the outset of the trial with a preliminary injunction.
Opening paragraphs of your book coming out of an LLM? That’s an intellectual issue to be discussed at length. Dollar-store “Call Me Maybe” generated on demand? Shut it down. I’m not saying it’s right, but it’s likely.
The predictable response from the companies has been that the system is not intended to replicate copyrighted works: a desperate, naked attempt to offload liability onto users under the DMCA’s safe harbor. That is, the same way Instagram isn’t liable if you use a copyrighted song to back your Reel. Here, the argument seems unlikely to gain traction, partly because of the aforementioned admissions that the companies themselves ignored copyright to begin with.
What will be the consequence of these lawsuits? As with all things AI, it’s quite impossible to say ahead of time, since there is little in the way of precedent or applicable, settled doctrine.
My prediction is that the companies will be forced to expose their training data and methods, since those are of clear evidentiary interest. If that evidence shows that they are indeed misusing copyrighted material, we’ll see an attempt to settle or avoid trial, and/or a speedy judgment against Udio and Suno. It’s likely that at least one of the two will attempt to continue onward using legal (or at least legal-adjacent) sources of music, but the resulting model would, by their own standards for training data, almost certainly be a huge step down in quality, and users would flee.
Investors? Ideally, they’ll lose their shirts, having placed their bets on something that was in all likelihood illegal and certainly unethical, and not just in the eyes of nebbishy author associations but according to the legal minds at the infamously and ruthlessly litigious RIAA.
The consequences may be far-reaching: If investors in a hot new generative media startup see a hundred million dollars vaporized due to the fundamental nature of generative media, suddenly a different level of diligence will seem appropriate.
Companies may learn from the trial or settlement documents what can be said — or perhaps more importantly, what should not be said — to avoid liability and keep copyright holders guessing.
Though this particular suit seems almost a foregone conclusion, few AI companies leave their fingerprints around the crime scene quite so liberally, so it will be less a playbook for prosecuting or squeezing settlements out of other generative AI companies than an object lesson in hubris.
It’s good to have one of those every once in a while, even if the teacher happens to be the RIAA.