Anthropic’s Secretive ‘Mythos’ AI Tool Reportedly Accessed by Unauthorised Group
According to the report, a private online group accessed the system through a third-party vendor environment shortly after its announcement.
A group of unauthorised users has gained access to Mythos, a cybersecurity-focused AI tool recently introduced by Anthropic, Bloomberg reported.
Mythos, part of Anthropic’s broader push into enterprise security, is designed to help organisations detect and mitigate cyber threats. However, the company has previously warned that the tool could become a powerful hacking asset if misused.
“We’re investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments,” an Anthropic spokesperson told TechCrunch.
The company added that it has found no evidence so far that its internal systems have been compromised.
The group reportedly used multiple strategies to gain access, including using credentials belonging to an individual linked to an Anthropic contractor.
Members of the group, said to be part of a Discord community focused on unreleased AI models, have been actively experimenting with Mythos and shared proof of access through screenshots and live demonstrations.
Bloomberg reported that the group may have located the tool by analysing patterns from previous Anthropic model deployments. Despite gaining access, a source indicated the group was more interested in exploring the technology than exploiting it.
Mythos was released to select partners, including Apple, under a limited program called Project Glasswing, with safeguards intended to prevent misuse.
Reportedly, OpenAI is finalising a new AI product with advanced cybersecurity capabilities, expected to be released to a small group of trusted partners.