Operant AI Discovers 'Shadow Escape' — A Zero-Click Agentic AI Attack Exploiting Model Context Protocol

The flaw enables silent data exfiltration across popular AI platforms, including ChatGPT, Claude, Gemini, and other MCP-connected assistants.

Operant AI Discovers 'Shadow Escape' — A Zero-Click Agentic AI Attack Exploiting Model Context Protocol
(Image-Freepik)

Operant AI has disclosed the discovery of “Shadow Escape”, the first known zero-click agentic AI attack exploiting vulnerabilities in the Model Context Protocol (MCP).

The flaw enables silent data exfiltration across popular AI platforms, including ChatGPT, Claude, Gemini, and other MCP-connected assistants.

The attack allows malicious actors to steal sensitive personal and financial information — such as Social Security numbers, medical records, and transaction data — without user interaction or detection.

Operant AI described Shadow Escape as a new class of AI-native threat that operates entirely within authorised enterprise environments, bypassing traditional cybersecurity defenses.

“Securing MCP and agentic identities is absolutely critical,” said Donna Dodson, former Chief of Cybersecurity at NIST. “Operant AI’s ability to detect and block these attacks in real time is pivotal for industries under strict compliance standards.”

The attack unfolds in three stages — infiltration through hidden instructions in legitimate documents, discovery of sensitive data via MCP’s system access, and silent exfiltration disguised as analytics traffic. Unlike prompt injection or phishing, it requires no user clicks or visible actions.

Operant AI has reported the vulnerability to OpenAI and initiated the CVE designation process, emphasising that the issue is protocol-level — not tied to any single AI vendor.

"While MCP has become a foundational protocol enabling powerful AI integrations, our research reveals that standard MCP configurations create unprecedented attack surfaces that operate beyond the reach of traditional security controls.

"Shadow Escape demonstrates how AI agents can be weaponized through 0-click attacks that are invisible to both users and conventional security methods. The attack happens entirely within authenticated sessions, using legitimate credentials, making the blast radius potentially catastrophic given the scale and speed at which agents can operate,"  said Vrajesh Bhavsar, CEO and co-founder of Operant AI.

With 80% of enterprises using agentic AI, Operant AI estimates that trillions of private records could be at risk. The startup urged organisations to audit MCP-based AI systems, enforce least-privilege access, and deploy runtime AI defense guardrails immediately