A Setback for Creators: Why Recent AI Copyright Rulings Threaten Artistic Rights

It is the first time U.S. courts have endorsed using copyrighted literary works to train large language models (LLMs).


Two major court rulings this week—favoring Anthropic and Meta—represent a worrying shift in how U.S. courts interpret fair use in the context of AI training. While hailed by tech companies as transformative victories, these decisions may mark a troubling defeat for artists and authors.

In Bartz v. Anthropic, U.S. District Judge William Alsup ruled that Anthropic’s use of copyrighted books to train its Claude model qualified as “fair use,” describing the practice as “exceedingly transformative.”

It is the first time U.S. courts have endorsed using copyrighted literary works to train large language models (LLMs). However, Alsup also found that Anthropic had downloaded millions of pirated books, conduct that fell outside fair use protections, leaving that issue to be resolved in a later trial.

While Anthropic may eventually lose on the piracy issue, the core concern for artists isn’t just about pirated material—it's about their work being used to train AI models that generate billions in revenue, without acknowledgment or compensation to the original creators.

Just days later, in Kadrey v. Meta, U.S. District Judge Vince Chhabria dismissed a lawsuit brought by thirteen authors—including Sarah Silverman—who claimed that Meta’s use of their copyrighted books to train its Llama model violated their rights. Chhabria ruled that the plaintiffs had not sufficiently demonstrated economic harm and found the use protected as fair use.


Taken together, these decisions widen the perceived legal leeway for AI firms, seemingly green‑lighting the use of copyrighted text during training. Both rulings hinge on the central concept of fair use and especially the criterion of “transformativeness” — that AI systems create something new, not merely reproducing the original.

From the vantage of artists, however, this represents a bittersweet outcome at best. These creators—writers, musicians, illustrators—invest time, emotion, and skill into their work.

Their livelihoods depend on control over, and compensation for, the reuse of their work. If LLMs, trained on vast troves of copyrighted text, can use their creations freely, artists risk being sidelined.

These rulings could influence pending lawsuits—like those against OpenAI, Stability AI, and Midjourney—cementing a legal precedent that favors AI over art without addressing compensation frameworks.

Earlier this year, OpenAI argued before the Delhi High Court that copyright protection in news reporting is limited due to the overriding public interest in the free flow of information.

The statement came in response to a copyright infringement suit filed by news agency Asian News International (ANI).

Representing OpenAI, Senior Advocate Amit Sibal argued that while news may contain similar facts, ideas, or names, copyright law only protects the unique expression of those elements—not the facts themselves.

He further emphasised that in the case of news, copyright protection is particularly narrow, as news content is meant to be widely disseminated in the public interest.

Unless Congress updates copyright law for the digital age, or courts demand licensing as a condition of fair use, artists may continue to lose ground.

Silver Lining

Amidst all the developments, there is also a silver lining. While Chhabria emphasized that the authors failed to demonstrate “market harm,” he also noted that even if LLM training is considered highly transformative, it is difficult to justify the fairness of using copyrighted books to build tools that generate billions—or even trillions—in profit, while also producing an endless flow of AI-generated content that could severely damage the market for the original books.

“The Copyright Office report yesterday affirms what our members - and the broader creative community - have long contended: that the wholesale ingestion of copyrighted works to train gen AI models without consent or compensation is NOT fair use under current law,” Jason Kint, CEO of Digital Content Next, said.