Researchers Demonstrate How ChatGPT Could Leak Private Emails Through MCP Tools
OpenAI recently rolled out full support for the Model Context Protocol (MCP).

A new security demonstration by Eito Miyamura, co-founder of cybersecurity startup Edisonwatch, has raised alarms about the risks posed by AI assistants with expanded tool access.
On Wednesday, OpenAI rolled out full support for the Model Context Protocol (MCP), a framework that allows ChatGPT to connect to external services such as Gmail, Google Calendar, SharePoint, and Notion. While designed to make the AI more useful, the integration could also open the door to novel attack methods.
According to Miyamura, all an attacker needs is a target’s email address. By sending a malicious calendar invite embedded with a jailbreak prompt, the attacker can hijack ChatGPT the moment a user asks it to review their calendar. Once compromised, the AI may follow the attacker’s hidden instructions—such as searching through private emails and forwarding sensitive data—without the user realizing.
We got ChatGPT to leak your private email data 💀💀
— Eito Miyamura | 🇯🇵🇬🇧 (@Eito_Miyamura) September 12, 2025
All you need? The victim's email address. ⛓️💥🚩📧
On Wednesday, @OpenAI added full support for MCP (Model Context Protocol) tools in ChatGPT. Allowing ChatGPT to connect and read your Gmail, Calendar, Sharepoint, Notion,… pic.twitter.com/E5VuhZp2u2
Currently, OpenAI has limited MCP tools to developer mode, requiring manual approval for each session. But Miyamura warns that “decision fatigue is a real thing, and normal people will just trust the AI and click approve, approve, approve.”
"Remember that AI might be super smart, but can be tricked and phished in incredibly dumb ways to leak your data," he said.
Comments ()