Sam Altman Slams NYT Over ChatGPT User Privacy in Ongoing Legal Battle
The clash highlights broader tensions around data governance, user consent, and the boundaries of discovery in AI-related litigation

In a sharp rebuke posted on X, OpenAI CEO Sam Altman criticized The New York Times for demanding that the company retain ChatGPT user conversations, regardless of user consent, as part of its ongoing copyright lawsuit.
“AI privacy is critically important as users rely on AI more and more,” Altman wrote. “But [The Times] continue to ask a court to make us retain ChatGPT users' conversations when a user doesn't want us to. This is not just unconscionable, but also overreaching and unnecessary to the case.”
The lawsuit, filed in December 2023, accuses OpenAI and its partner Microsoft of using The Times’ copyrighted content without permission to train their AI models. As part of discovery, The Times has sought access to specific user prompts and ChatGPT outputs, a request that Altman claims compromises user trust and data privacy.
Altman also floated the idea of an “AI privilege”—a legal framework akin to attorney-client confidentiality—that would shield conversations between users and AI systems.
OpenAI has vowed to fight what it sees as an unprecedented legal intrusion into user privacy.
Earlier this week, two major rulings favored Anthropic and Meta, with U.S. judges finding that their use of copyrighted books for AI training qualified as “fair use.”
In Bartz v. Anthropic, Judge Alsup called the use “exceedingly transformative,” while in Kadrey v. Meta, Judge Chhabria dismissed the authors’ claims. Creators warn these decisions could undermine artistic rights and fair compensation.