Anthropic Accidentally Exposes Claude Code Source Files in Packaging Error

This resulted in the exposure of more than 500,000 lines of TypeScript code across nearly 2,000 files.

Anthropic has inadvertently exposed significant portions of the source code behind its Claude Code command-line tool after a packaging error led to sensitive files being included in a public npm release.

Claude Code is designed to let developers interact directly with Anthropic’s AI models from the terminal, enabling them to write, edit and debug code while automating development workflows. The tool functions as an AI-powered coding agent without requiring a full integrated development environment.

The issue emerged in version 2.1.88 of the npm package, where a source map file was mistakenly included. Source maps map compiled JavaScript back to its original source and often embed the original files wholesale, which is how a single stray file could reveal more than 500,000 lines of TypeScript code across nearly 2,000 files. The leaked material reportedly includes key components such as the system’s agent architecture, execution logic and integrations.
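Mistakes like this are commonly prevented by allow-listing what a package publishes. A minimal sketch of a package.json `files` field that ships compiled JavaScript while excluding source maps (the package name and paths here are illustrative, not Anthropic’s actual configuration):

```json
{
  "name": "example-cli",
  "version": "1.0.0",
  "files": [
    "dist/**/*.js",
    "!dist/**/*.map"
  ]
}
```

Running `npm pack --dry-run` before publishing prints the exact file list that would be uploaded, making a stray `.map` file easy to spot in a release pipeline.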

Anthropic acknowledged the incident, stating, “this was a release packaging issue caused by human error, not a security breach,” and that it is “rolling out measures to prevent this from happening again.”

While the company confirmed that no user data, prompts or customer information were compromised, the leak raises concerns around intellectual property and system transparency. Once such code is publicly released, it is difficult to fully contain, as copies can quickly spread across external platforms.

Experts note that access to internal code can offer insights into how AI agents manage workflows, permissions and tool usage, potentially exposing weaknesses or enabling more targeted attacks. It may also provide competitors with a clearer view of Anthropic’s product architecture.

The incident follows earlier reports of internal documents related to Anthropic’s upcoming AI model being found in a publicly accessible cache, highlighting ongoing challenges in safeguarding sensitive AI assets.