CloudSEK Uncovers AI Summarizer Exploit Turning Trusted Tools into Ransomware Delivery Channels
By embedding malicious payloads in HTML using CSS-based obfuscation attackers can trick AI summarizers into reproducing harmful, step-by-step ransomware instructions.

CloudSEK’s latest cybersecurity research has revealed a dangerous new threat vector: the misuse of AI summarisation tools as unintentional ransomware delivery mechanisms.
The report, titled “Trusted My Summarizer, Now My Fridge Is Encrypted,” highlights how attackers are weaponizing invisible prompt injection and prompt overdose techniques to manipulate AI tools embedded in email clients, browsers, and enterprise apps.
By embedding malicious payloads in HTML using CSS-based obfuscation—like white-on-white text, zero-width characters, or off-screen rendering—attackers can trick AI summarizers into reproducing harmful, step-by-step ransomware instructions. These instructions often appear trustworthy, increasing the chances that non-technical users will follow them.
CloudSEK’s proof-of-concept shows AI summarizers echoing Base64-encoded PowerShell commands capable of simulating ransomware deployment. This vulnerability could lead to widespread social engineering attacks via email previews, search snippets, blogs, and browser extensions.
“What makes this discovery so alarming is the scale and magnitude of the threat. AI summarizers are now embedded in email clients, search engines, collaboration tools, and enterprise workflows — touching millions of users daily. If even a fraction of that ecosystem is poisoned with hidden instructions, the impact could be catastrophic. We’re talking about a delivery channel that can amplify ransomware lures globally, lower the barrier for execution, and evade traditional security controls because the malicious steps appear to originate from a trusted AI assistant. It’s not just a technical exploit — it’s a social engineering superweapon that could redefine how cybercriminals launch and scale their campaigns,” said Dharani Sanjaiy, Researcher at CloudSEK
The implications are severe: enterprises risk exposing internal copilots and summarizers to poisoned content, potentially causing operational and reputational damage. CloudSEK warns that this AI-driven exploit could massively scale ransomware distribution and lower the technical barrier for attackers.
Mitigation strategies include client-side sanitisation, prompt filtering, encoded payload detection, enterprise AI policy enforcement, and user education on AI-generated content.