Tenable Uncovers “HackedGPT” Vulnerabilities in ChatGPT-4o and ChatGPT-5
While OpenAI has addressed some of the issues, several remained unpatched at the time of publication, leaving active exposure pathways.
Tenable has uncovered seven vulnerabilities impacting OpenAI’s ChatGPT-4o and ChatGPT-5, warning that the flaws — collectively dubbed HackedGPT — could enable attackers to secretly steal users’ personal data by bypassing built-in safety controls.
While OpenAI has addressed some of the issues, several remained unpatched at the time of publication, leaving active exposure pathways.
According to Tenable, these vulnerabilities show how attackers could extract stored chats, access memories, and manipulate AI behaviour without user awareness.
The flaws centre on a rapidly emerging threat class: indirect prompt injection. By embedding hidden instructions in webpages, comments, or links, attackers can trick the model into executing unauthorized actions.
Researchers found these attacks target ChatGPT’s web browsing and memory capabilities — features that process live online content and save long-term user data, creating new avenues for manipulation.
Tenable demonstrated two silent attack modes: “0-click” compromises, triggered simply by asking ChatGPT a question that leads it to a malicious webpage, and “1-click” attacks, where clicking a crafted link causes ChatGPT to execute hidden commands.
Even more alarming is Persistent Memory Injection, which lets attackers plant malicious instructions in ChatGPT’s long-term memory, enabling continued data leaks across future sessions until manually removed.
“HackedGPT exposes a fundamental weakness in how large language models judge what information to trust,” said Moshe Bernstein, Senior Research Engineer at Tenable. “Individually, these flaws seem small — but together they form a complete attack chain… AI systems aren’t just potential targets; they can be turned into attack tools.”
The seven vulnerabilities span indirect prompt injection through trusted sites, 0-click and 1-click attacks, safety bypass via wrapper URLs, cross-system conversation injection, malicious content hiding, and persistent memory exploitation. Potential impacts include theft of sensitive chat histories, exfiltration of data from connected services, and manipulation of responses to influence users.
Tenable says several issues still affect ChatGPT-5, and urges AI vendors to strengthen defences by isolating browsing, search, and memory functions and ensuring safety mechanisms work reliably. Security teams are advised to treat AI tools as active attack surfaces, monitor for manipulation, and enforce strong governance and data-classification controls.
“This research isn’t just about exposing flaws — it’s about changing how we secure AI. People and organisations alike need to assume that AI tools can be manipulated and design controls accordingly. That means governance, data safeguards, and continuous testing to make sure these systems work for us, not against us,” Bernstein added.
Comments ()