OpenAI Report Says Chinese Law Enforcement Used ChatGPT to Expose Global Harassment Campaign
The revelation sheds light on coordinated efforts to track and silence regime critics both inside China and abroad.
OpenAI’s latest threat intelligence report reveals that a Chinese law enforcement official inadvertently exposed details of a large-scale online harassment and influence operation targeting critics, dissidents, and even international figures by uploading internal reports to ChatGPT.
The account, now banned by OpenAI, used ChatGPT to review and edit descriptions of what it termed “cyber special operations,” which included harassment campaigns against Chinese dissidents and critics of the Chinese Communist Party.
The uploaded documents suggested a “large-scale, resource-intensive and sustained” operation involving hundreds of staff members and thousands of fake social media accounts.
These campaigns reportedly used tactics such as flooding platforms with false reports against dissidents, forging documents and impersonating U.S. officials to intimidate or suppress criticism. One alleged effort involved planning a propaganda campaign against Japanese Prime Minister Sanae Takaichi after she criticized China’s human rights record.
While OpenAI found evidence of only a single account tied to the agency, it characterized the documented activities as part of a broader, industrial-scale digital influence strategy.
The company emphasized that ChatGPT was not used to carry out most of the campaign’s online content distribution; instead, it served as a planning and review tool, with other AI models and fake accounts executing the tactics described.
The disclosure highlights growing concerns about how AI platforms can intersect with state-linked information operations, prompting renewed focus on misuse prevention and the tools of digital authoritarianism.