Anthropic Lawyer Admits to Using Fake Citation Generated by Claude AI in Copyright Lawsuit

Anthropic attributed the mistake to “an honest citation error,” emphasising it was not an intentional fabrication


In a legal filing submitted Thursday in a Northern California court, a lawyer representing Anthropic admitted to citing a fabricated source generated by the company’s AI chatbot, Claude, in its ongoing legal dispute with music publishers, Bloomberg reported.

According to the filing, Claude produced a citation with an incorrect title and authors.

Anthropic attributed the mistake to “an honest citation error,” emphasising it was not an intentional fabrication and acknowledging that its manual citation check failed to catch several similar inaccuracies.

The admission comes after lawyers for Universal Music Group and other publishers accused Anthropic’s expert witness, company employee Olivia Chen, of referencing non-existent articles in her testimony.

Judge Susan van Keulen subsequently ordered Anthropic to address the claims.

This incident underscores rising concerns about the reliability of AI, not only in legal settings but also in other critical domains such as healthcare and education.

It also raises questions about the billions of dollars of investment being poured into generative AI.

Last month, OpenAI, a rival startup to Anthropic, admitted that its newer models, o3 and o4-mini, hallucinate more often than older reasoning models like o1, o1-mini, and o3-mini, as well as traditional models such as GPT-4.