U.S. Federal Judges Blame AI Tools for Error-Filled Court Orders, Promise Stricter Oversight
Two U.S. federal judges have admitted that their chambers’ use of generative AI tools led to error-filled court orders — including misquotes and references to unrelated parties — according to letters released by Senate Judiciary Chairman Chuck Grassley on Thursday.
Judges Henry T. Wingate of Mississippi and Julien Xavier Neals of New Jersey said that their law clerks used AI systems such as ChatGPT and Perplexity to draft judicial orders, which were prematurely entered into court dockets before review. Both judges have since withdrawn the flawed orders and outlined measures to prevent similar incidents.
Judge Neals confirmed reports that an intern used ChatGPT “without authorisation or disclosure,” violating both court and law school policies. He has now issued a written ban on AI-generated legal drafting until federal guidelines are established.
Judge Wingate said his clerk used Perplexity to summarise public case information and mistakenly filed a draft opinion that bypassed internal review. He has implemented new safeguards, requiring independent review of every draft and citation verification through Westlaw.
A Perplexity spokesperson acknowledged that while the tool isn’t “100% accurate,” it is designed to be transparent and citation-based.
Grassley praised the judges’ accountability but urged the judiciary to adopt “permanent AI policies” to ensure integrity in the courts. “We can’t allow overreliance on artificial assistance to upend factual accuracy,” he warned.