Chinese Hacker Used Claude to Probe 30 Global Targets, From Big Tech to Governments
AI startup Anthropic has revealed what it describes as the first documented case of a state-sponsored cyber-espionage operation largely executed by an AI system.
The incident, uncovered in September 2025, involved a suspected Chinese actor using Anthropic’s Claude and Claude Code tools to target around 30 global entities, including tech companies, financial firms, chemical manufacturers and government bodies.
"We disrupted a highly sophisticated AI-led espionage campaign. The attack targeted large tech companies, financial institutions, chemical manufacturing companies, and government agencies. We assess with high confidence that the threat actor was a Chinese state-sponsored group."
— Anthropic (@AnthropicAI) November 13, 2025
“Our biggest concern is how widely this could scale and the implication that AI systems are now being used not just as advisors, but as autonomous operators,” Anthropic’s Threat Intelligence team said in a blog post detailing the incident.
The attackers manipulated the model’s agentic capabilities to carry out reconnaissance, exploit generation, data exfiltration and network penetration with minimal human intervention.
Upon detection, Anthropic immediately flagged and terminated the accounts involved, notified the affected organisations and coordinated with law enforcement.
The company said it has enhanced its detection systems and will publish further threat intelligence reports. “We’re sharing this case publicly to help those in industry, government and the research community strengthen their defences,” the blog stated.
Founded in 2021 by former OpenAI researchers including Dario Amodei, Anthropic has built its reputation on developing large language models such as Claude and promoting safe AI development.
The episode underscores the growing security challenges posed by agentic AI systems—machines able to operate independently and continuously across tasks.
The revelation serves as a wake-up call for cybersecurity and AI stakeholders alike: as systems become more autonomous, protecting them from misuse becomes equally critical.