RightsDocket
Creator Guide · Apr 29, 2026 · 9 min read

The $400M+ Mechanical Royalty Backlog and Why It Matters for AI-Assisted Musicians

U.S. songwriters and publishers are owed $400M+ in adjusted and unmatched mechanical royalties — money the system couldn't pay because metadata didn't match. Here's why AI-assisted music makes that problem larger, and what creators can document on their side. Not legal advice.

Freshness Check

Last reviewed Apr 29, 2026. Reviewed against The MLC's public reporting on Phono III adjustments and historical unmatched royalties, plus Billboard and Recording Academy coverage through April 2026.

Direct Answer

The Mechanical Licensing Collective identified roughly $419.2 million in underpaid streaming royalties for 2018–2022 (the "Phono III" adjustment), and previously received $424.4 million in historical "black-box" royalties from DSPs covering 2007–2020. The reason most of this money sat unpaid is the same in both cases: metadata gaps and ownership-data quality.

AI-assisted music is now adding new fragmentation — unclear contributor splits, synthesized voices, and undocumented tool use — on top of an already brittle matching system. The durable response for creators is to document human authorship evidence during the work. Not legal advice.

The two numbers the music industry already lives with

In 2025, The MLC reported it had identified $419.2 million in adjusted U.S. mechanical royalties owed to songwriters and publishers for streaming activity during the Phonorecords III period (2018–2022). After overpayment offsets, the net is closer to $390 million. Distributions began in May 2025.

That's separate from the $424.4 million in historical unmatched royalties The MLC received from digital service providers — money DSPs paid for streams between 2007 and 2020 where the songwriter or publisher couldn't be identified at the time.

Add the two and the U.S. mechanical royalty system has had roughly $844 million sitting in a state of "the money exists, but the system can't pay the right person." The MLC has matched a meaningful share — current match rate is around 92% — but the residual is a structural feature, not a one-time anomaly.

How a stream becomes a payment — and where the chain breaks

The chain looks straightforward: a listener streams a song, the DSP reports the play, The MLC matches that play to a song and to ownership shares, and money flows to the writers and publishers on the split sheet. In practice, every step depends on metadata.

The match step is where the chain breaks. Without an ISWC, clean writer credits, and unambiguous splits, a play sits in an unmatched pool. Money still goes in. It just doesn't come out the right end.

This is a metadata problem, not a money problem. The DSPs paid. The system received. The only thing missing was a clean enough record of who wrote what to route the payment.
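The match step above can be sketched in a few lines. This is a hypothetical illustration, not how The MLC's systems actually work: the ISWC, the royalty amount, and the ownership table are all invented, and real matching involves far more signals than a single identifier. But it shows the structural point — a play with a clean identifier and valid splits routes to people; a play without one parks in the unmatched pool.

```python
# Hypothetical sketch of the match-and-split step. All identifiers,
# amounts, and ownership data below are invented for illustration.

UNMATCHED_POOL = []  # plays that could not be routed to a rights holder

# Ownership data keyed by ISWC; splits for each work must sum to 100.
OWNERSHIP = {
    "T-123456789-0": {"Writer A": 50.0, "Writer B": 50.0},
}

def route_payment(play):
    """Match a reported play to ownership shares, or park it unmatched."""
    splits = OWNERSHIP.get(play.get("iswc"))
    if splits is None or abs(sum(splits.values()) - 100.0) > 1e-6:
        # Missing identifier or inconsistent splits:
        # the money went in, but it can't come out the right end.
        UNMATCHED_POOL.append(play)
        return {}
    return {who: play["royalty"] * pct / 100.0 for who, pct in splits.items()}

matched = route_payment({"iswc": "T-123456789-0", "royalty": 0.004})
# matched -> {"Writer A": 0.002, "Writer B": 0.002}
orphaned = route_payment({"iswc": None, "royalty": 0.004})
# orphaned -> {} ; the play now sits in UNMATCHED_POOL
```

The point of the sketch: nothing in the `route_payment` path is about money. Both calls carry the same royalty; only the metadata differs.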

Why AI-assisted music makes the matching problem worse

Three new failure modes are arriving at scale.

Splits get fuzzier. A traditional songwriting session produces a contributor list with names, roles, and percentages. An AI-assisted session may include co-writers, a producer who used Suno for a draft instrumental, a lyricist who iterated 40 prompts in Udio, and a vocalist whose performance was processed through an AI vocal model. Who owns what share of what?

Voice and likeness fragment performance ownership. Synthesized vocals, voice cloning of one's own voice, or a vocalist whose performance was AI-stylized — each introduces a question about whose performance is on the recording. That question can affect master royalties, neighboring rights, and right-of-publicity exposure all at once.

Tool disclosure is now a required input, not optional. DistroKid AI Credits, Spotify's AI Credits beta (April 16, 2026), and Apple Music Transparency Tags (March 2026) all moved AI disclosure into the upload flow. The DDEX-based standard behind them carries AI contribution flags downstream. Disclosing accurately requires knowing which tool did what, when. If the creator can't say, the disclosure is either missing or wrong.

"We'll figure out the splits later" is the most expensive sentence in music

The traditional industry version of this sentence created the unmatched pool that took the MLC and the DSPs years to address — and is still being worked through. The AI-assisted version is worse, because the ambiguity is upstream of the recording itself: it's about what the work contains, not just who is on the split sheet.

The fix is procedural, not technical. Document during the work: a contributor list with named humans and percentages; a per-tool AI disclosure; the voice and likeness basis; pre-AI source files before they get overwritten; contemporaneous timestamps; and a release-readiness summary in plain English.
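One way to picture that procedural record is as a simple structured document with a sanity check attached. This is a minimal sketch under assumed field names — it is not RightsDocket's actual schema, a USCO filing format, or a DDEX message — but it captures the items listed above: named contributors with percentages, per-tool AI disclosures, the voice basis, preserved source files, and a timestamp.

```python
# Hypothetical "document during the work" record. Field names and
# values are illustrative assumptions, not any real platform's schema.
from datetime import datetime, timezone

track_record = {
    "title": "Demo Track",
    "contributors": [  # named humans with percentage splits
        {"name": "Lyricist", "role": "lyrics", "split_pct": 40.0},
        {"name": "Composer", "role": "music", "split_pct": 60.0},
    ],
    "ai_disclosures": [  # one entry per tool, describing what it did
        {"tool": "ExampleVocalModel", "used_for": "vocal styling"},
    ],
    "voice_basis": "performer's own voice, AI-stylized with consent",
    "source_files": ["pre_ai_vocal_take.wav"],  # pre-AI sources preserved
    "recorded_at": datetime.now(timezone.utc).isoformat(),
}

def check_record(rec):
    """Flag the gaps a matching or disclosure system would trip on."""
    problems = []
    total = sum(c["split_pct"] for c in rec["contributors"])
    if abs(total - 100.0) > 1e-6:
        problems.append(f"splits sum to {total}, not 100")
    if not rec["ai_disclosures"]:
        problems.append("no per-tool AI disclosure")
    if not rec["source_files"]:
        problems.append("no pre-AI source files preserved")
    return problems

issues = check_record(track_record)  # [] when the record is complete
```

A record like this is cheap to keep during the session and expensive to reconstruct a year later — which is the whole argument of this section.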

This record is what a future matching system will have to lean on. It is also what a USCO Limitation of Claim, a distributor AI disclosure, and a brand or sync reviewer's diligence already lean on.

Where RightsDocket fits — and where it doesn't

RightsDocket is the Human Authorship Evidence Platform for AI-Assisted Audio. It does not collect royalties, register works with the U.S. Copyright Office, file ISWCs, or replace a publisher. What it does is record what a track contains and produce a structured evidence record creators can use when their work meets the matching system, the disclosure system, or the filing system.

Start with the Free Rights Review — a diagnostic that looks at what was made, how AI was used, what evidence exists, and what the creator wants to do with the track. The review surfaces what is documented, what is missing, and which paid product fits next: Rights Receipt for an existing track, or Human Proof Pack for stronger pre-creation evidence capture.

Bottom line

Match rates depend on inputs. RightsDocket exists to make the inputs better at the source. Not legal advice.

About the Author

Abhi Basu

The RightsDocket editorial team covers music copyright, AI provenance, and legal documentation for creators and counsel. Guides are reviewed against current USCO guidance, distributor terms, and emerging AI copyright case law.

Frequently asked questions

What is the MLC?

The Mechanical Licensing Collective is the nonprofit organization established under the Music Modernization Act to administer blanket mechanical licenses for streaming services in the U.S. and to collect, match, and distribute mechanical royalties to songwriters and publishers.

What are black-box royalties?

Royalties paid by streaming services for music plays where the rights holder couldn't be identified at the time. The MLC received $424.4 million in such historical unmatched royalties covering 2007–2020 from DSPs, and has matched and distributed a substantial share since.

Is the $400M figure new money?

No. The roughly $419.2 million is an adjustment to underpaid streaming royalties for 2018–2022 (the Phono III period), identified after the Copyright Royalty Board's rate determination. The MLC began distributing it in May 2025.

How does AI-assisted music affect royalty matching?

By introducing new sources of metadata ambiguity: unclear contributor splits, synthesized or AI-stylized vocals, and per-tool disclosures that creators have to make at upload. Each adds room for the matching system to miss.

What can I actually do this week?

Start a contributor list, save your prompt history and pre-AI source files, and write a one-paragraph release-readiness summary for each AI-assisted track. Or run the Free Rights Review and let RightsDocket surface what is documented and what is missing.

Ready To Start

Create the project record before you export.

Sign in, document contributors and AI usage, and choose the paid product only when you are ready to export the structured evidence record.
