Researchers Break Microsoft Copilot Studio, Show How “No-Code” AI Agents Can Commit Fraud Instantly

The company warned that while organisations are rapidly adopting tools that let employees build their own AI agents, this democratisation of AI is creating “severe, yet overlooked” security risks.

Researchers Break Microsoft Copilot Studio, Show How “No-Code” AI Agents Can Commit Fraud Instantly
(Photo-Freepik)

Cybersecurity firm Tenable has released new research revealing that Microsoft Copilot Studio can be successfully jailbroken, raising fresh concerns over the growing use of no-code AI platforms inside enterprises.

The company warned that while organisations are rapidly adopting tools that let employees build their own AI agents, this democratisation of AI is creating “severe, yet overlooked” security risks.

To demonstrate how easily these agents can be manipulated, Tenable Research built a travel booking agent in Copilot Studio. The AI assistant was designed to manage reservations end-to-end and was given demo customer data, including names, contact information and credit card details. It also received strict instructions to verify identity before sharing or modifying any information.

Using a prompt injection attack, researchers hijacked the agent’s workflow, extracting sensitive credit card data and booking a free vacation. The implications, Tenable said, include potential data breaches, regulatory consequences and financial fraud.

The agent was coerced into bypassing identity checks to leak payment card details and was manipulated into changing a trip price to zero because it had broad edit permissions.

“AI agent builders, like Copilot Studio, democratise the ability to build powerful tools, but they also democratise the ability to execute financial fraud, thereby creating significant security risks without even knowing it,” said Keren Katz, Senior Group Manager of AI Security Product and Research at Tenable. “That power can easily turn into a real, tangible security risk.”

Tenable urged organisations to implement stricter governance, limit permissions and monitor agent behaviour to prevent data leakage and misuse.