Advocacy Group Urges OpenAI to Suspend Sora 2 Amid Deepfake Concerns

The group states the launch was “rushed” and that Sora 2 shows “a reckless disregard” for safety, personal likeness rights, and democratic integrity.

The nonprofit Public Citizen has called on OpenAI to temporarily pull its video generation tool Sora 2, citing a surge in synthetic content that could undermine trust in what people see online.

In a letter dated November 11 to CEO Sam Altman, the group states the launch was “rushed” and that Sora 2 shows “a reckless disregard” for safety, personal likeness rights, and democratic integrity.

Public Citizen argues that while advanced video tools have existed for some time, Sora 2's ability to produce lifelike footage with minimal indicators of manipulation, often only a small watermark, poses a significant risk.

The organisation warns that this may fuel disinformation campaigns and weaken the credibility of real visual evidence.

“The hasty release of Sora 2 demonstrates a reckless disregard for product safety, name/image/likeness rights, the stability of our democracy, and fundamental consumer protection against harm,” said J.B. Branch, a Big Tech accountability advocate at Public Citizen.

OpenAI responded by stating that Sora videos include visible watermarks and metadata tagging, and that it has systems designed to block depictions of individuals who have not consented.

“We have multiple guardrails intended to ensure that a living person’s likeness can’t be generated in Sora unless they’ve intentionally uploaded a cameo and given consent for it to be used,” the company said.

Still, Public Citizen insists stronger safeguards are needed and has recommended that OpenAI pause the tool’s public deployment while working with external experts on ethical and technological frameworks to prevent misuse.