Anthropic Accidentally Exposes Claude Code Source Files in Packaging Error
The packaging mistake exposed more than 500,000 lines of TypeScript code across nearly 2,000 files.
Anthropic has inadvertently exposed significant portions of the source code behind its Claude Code command-line tool after a packaging error led to sensitive files being included in a public npm release.
Claude Code is designed to let developers interact directly with Anthropic’s AI models from the terminal, enabling them to write, edit and debug code while automating development workflows. The tool functions as an AI-powered coding agent without requiring a full integrated development environment.
Claude code source code has been leaked via a map file in their npm registry!
— Chaofan Shou (@Fried_rice) March 31, 2026
Code: https://t.co/jBiMoOzt8G pic.twitter.com/rYo5hbvEj8
The issue emerged in version 2.1.88 of the npm package, where a source map file was mistakenly included. This resulted in the exposure of more than 500,000 lines of TypeScript code across nearly 2,000 files. The leaked material reportedly includes key components such as the system’s agent architecture, execution logic and integrations.
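Source maps explain why a single stray file can expose an entire codebase: a `.map` file is plain JSON, and when the bundler embeds a `sourcesContent` array, it carries the full pre-compilation source of every input file alongside the shipped JavaScript. As a rough illustration of the mechanism (not Anthropic's actual file layout, which is not public), a minimal extractor might look like this; the webpack-style path prefix and file names are assumptions for the sketch:

```python
import json
import pathlib


def extract_sources(map_path: str, out_dir: str) -> int:
    """Recover original files from a JavaScript source map.

    If the map embeds "sourcesContent", each entry is the complete text
    of one original source file; we just write them back to disk.
    Returns the number of files written.
    """
    with open(map_path, encoding="utf-8") as f:
        smap = json.load(f)

    sources = smap.get("sources", [])
    contents = smap.get("sourcesContent") or []
    written = 0

    for src, content in zip(sources, contents):
        if content is None:
            continue
        # Strip bundler scheme prefixes like "webpack://pkg/src/a.ts"
        # and guard against path traversal in the recorded names.
        rel = src.split("://")[-1].lstrip("/").replace("..", "_")
        dest = pathlib.Path(out_dir) / rel
        dest.parent.mkdir(parents=True, exist_ok=True)
        dest.write_text(content, encoding="utf-8")
        written += 1

    return written
```

Nothing here requires exploiting anything: the map is an ordinary file in the published package, so anyone who installed that version could reconstruct the original TypeScript this way.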
Anthropic acknowledged the incident, stating, “this was a release packaging issue caused by human error, not a security breach,” and that it is “rolling out measures to prevent this from happening again.”
While the company confirmed that no user data, prompts or customer information were compromised, the leak raises concerns around intellectual property and system transparency. Once such code is publicly released, it is difficult to fully contain, as copies can quickly spread across external platforms.
This is either brilliant or scary:
— Gergely Orosz (@GergelyOrosz) March 31, 2026
Anthropic accidentally leaked the TS source code of Claude Code (which is closed source). Repos sharing the source are taken down with DMCA.
BUT this repo rewrote the code using Python, and so it violates no copyright & cannot be taken down! pic.twitter.com/uSrCDgGCAZ
Experts note that access to internal code can offer insights into how AI agents manage workflows, permissions and tool usage, potentially exposing weaknesses or enabling more targeted attacks. It may also provide competitors with a clearer view of Anthropic’s product architecture.
From the massive Anthropic leak of their entire Claude Code.
— Rohan Paul (@rohanpaul_ai) March 31, 2026
The "Undercover Mode" is so interesting.
It's a safety system that kicks in automatically whenever Claude Code is used to contribute code to public or open-source repositories (GitHub PRs, commits, etc.).
The goal is… https://t.co/V1xBofUSl5 pic.twitter.com/2Kmdkzyetq
The incident follows earlier reports of internal documents related to Anthropic’s upcoming AI model being found in a publicly accessible cache, highlighting ongoing challenges in safeguarding sensitive AI assets.