Stop Telling Your Chatbot Everything, Stanford Researchers Warn


A new Stanford University study has raised alarms over how major AI companies handle user conversations, revealing that leading developers are quietly using chat data to train their models — often by default and with limited transparency.

The issue surfaced prominently last month when Anthropic updated its terms of service, stating that conversations with its chatbot Claude would automatically be used for training unless users opted out.

According to the Stanford report, Anthropic is far from alone. Six major U.S. AI developers — Amazon, Anthropic, Google, Meta, Microsoft, and OpenAI — all feed user interactions back into their systems to sharpen capabilities and strengthen market position.

“Absolutely yes,” says lead author Jennifer King of Stanford’s Institute for Human-Centered AI, when asked whether users should be concerned. Sensitive personal or medical details shared during chats — even in uploaded files — may be collected and used for training, she warns.

The Stanford researchers reviewed 28 policy documents across the six companies and found pervasive gaps: long retention periods, weak explanations of how data is de-identified, and little clarity on whether humans review transcripts.

In many cases, chat inputs also merge with data from other products, creating detailed behavioral profiles across search, commerce, and social platforms.

Children’s data is another red flag. Policies differ widely: some companies explicitly collect data from minors, while others rely on age declarations they cannot verify.

The researchers argue that current privacy practices echo the internet’s flawed legacy of dense, unreadable policies that users must accept to participate online. With millions now interacting daily with AI chatbots — and with no comprehensive U.S. federal privacy law — consumers face more exposure than ever.

The study calls for federal regulation, stronger opt-in requirements, and default filtering of personal data. “We need to weigh whether gains in AI capabilities are worth the considerable loss of consumer privacy,” King says.