
Anthropic Claude Source Code Leak: What the Code Exposes and What It Hides

Anthropic Claude User Interface prompt bar with categories like Write, Learn, Code, and Life stuff.

Anthropic confirmed on Tuesday that its popular AI coding assistant Claude Code accidentally shipped with its full internal source code attached. The exposure, spanning more than 512,000 lines across nearly 2,000 files, occurred when the company published version 2.1.88 to the public npm registry with a source map file still included. No customer data or credentials were involved, but the code itself spread rapidly before Anthropic could contain it.

How the Anthropic Claude Source Code Leak Happened

Source map files are developer tools that connect bundled, minified code back to the original source. When Anthropic published the 2.1.88 update to Claude Code, a tool it launched in February 2025, the npm package contained a reference to the unobfuscated TypeScript source, as reported earlier by Ars Technica. A user on X first identified the leak and shared the file publicly.
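The exposure mechanism is mundane. A bundler appends a `//# sourceMappingURL=` comment to the minified output, and the referenced .map file is plain JSON whose optional `sourcesContent` field can embed the original source verbatim. The sketch below illustrates this in the abstract; the file names and the `runAgent` function are invented for illustration and are not from Anthropic's actual bundle:

```typescript
// Minimal sketch of why a shipped .map file exposes source: when
// "sourcesContent" is populated, the unminified code ships inside the JSON.
// A bundler typically ends the minified file with a pointer like:
//   //# sourceMappingURL=cli.js.map
const exampleMap = JSON.stringify({
  version: 3,
  file: "cli.js",
  sources: ["../src/agent.ts"], // hypothetical path
  sourcesContent: [
    "export function runAgent(task: string): string {\n  return `running ${task}`;\n}\n",
  ],
  mappings: "AAAA", // encoded position data; irrelevant to the exposure
});

// Recovering the original source is a one-line lookup, no decoding required:
const parsed = JSON.parse(exampleMap);
const recovered: string = parsed.sourcesContent[0];
console.log(recovered.includes("runAgent")); // original identifiers survive intact
```

This is why a stray .map file in an npm tarball can amount to publishing the source tree itself, even though the package's visible JavaScript remains minified.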

Anthropic spokesperson Christopher Nulty stated: “Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed.” The company did not immediately clarify how the map file was included in a production release.

By Wednesday morning, Anthropic had filed copyright takedown requests forcing the removal of more than 8,000 copies and adaptations of the raw source that developers had shared on GitHub, according to the WSJ. Despite those efforts, copies persisted across Reddit, X, Discord, and forked repositories.

What the Code Reveals and What It Does Not

Developers who examined the leaked TypeScript source reported finding references to several unreleased features. Among them: a Tamagotchi-style virtual “pet” companion and an always-on background agent designed to perform tasks autonomously on a user’s behalf. Claude Code had already been adding agentic capabilities before the leak, but these additions suggest the roadmap extends further than publicly disclosed.

The code also contained an “Undercover Mode” subsystem, an internal mechanism built to prevent Claude Code from accidentally leaking Anthropic’s own internal codenames and project names when the AI contributes to open-source repositories. The existence of such a subsystem reveals how seriously Anthropic treats the risk of self-exposure through its own products.

One internal comment attributed to an Anthropic coder acknowledged a real engineering trade-off: the coder wrote that a memoization approach “increases complexity by a lot” and expressed uncertainty about whether it actually improves performance. That kind of candor is rarely visible from outside a company’s codebase.

However, the leak does not give anyone a working, deployable version of Claude Code. The exposed TypeScript source is architectural insight, not a turnkey product. CNET noted the leak gives developers a clearer view of how the tool was built, but building a competitor still requires substantial independent work.

Community Forks, Rust Ports, and the Limits of Open Reverse-Engineering

A GitHub repository called claw-code, maintained by GitHub user instructkr, became one of the fastest-growing responses to the leak. It reportedly surpassed 50,000 stars on GitHub within two hours, a pace that positioned it as the fastest repository in history to reach that milestone. The repo accumulated 50,000 forks as well.

The project, built on earlier work by bellman_ych including oh-my-codex (OmX) and oh-my-opencode (OmO), is aimed at harness engineering research and includes use cases like code review and persistent execution loops. Its community, oriented around LLMs, harness engineering, and agent workflows, coordinates largely through Discord.

A dev/rust branch is underway, targeting a faster and more memory-safe harness runtime. The Rust port includes an API client with provider abstraction, OAuth, and streaming support; a runtime covering session state, compaction, MCP orchestration, and prompt construction; a tools crate for manifest definitions and execution; a commands crate for slash commands, skills discovery, and config inspection; a plugins crate for hook pipelines; a compatibility harness for upstream editor integration; and a full Claw CLI with interactive REPL, markdown rendering, and project bootstrap flows.

The Python port, by contrast, mirrors the archived root-entry file surface and top-level subsystem names, including command and tool inventories up to a limit of 10 inspected items and up to 16 listed Python modules, but is explicitly described as not yet a full runtime-equivalent replacement for the original TypeScript system. The Current Parity Checkpoint section of the repository acknowledges that the Python tree still contains fewer executable runtime slices than the archived source.

The repository includes an ownership disclaimer: it does not claim ownership of the original Claw Code source material and is not affiliated with Anthropic or the original authors. That disclaimer matters legally, given Anthropic’s active use of copyright takedowns.

Security Risks, Competitive Exposure, and What Comes Next

Arun Chandrasekaran, an AI analyst at Gartner, told The Verge that the leak carries real security risk, including “risks such as providing bad actors with possible outlets to bypass guardrails.” Understanding an AI tool’s internal architecture can surface edge cases that bad actors could attempt to exploit, a concern that grows as agentic capabilities expand.

Beyond security, the competitive exposure is meaningful. According to CNBC, the leak could help software developers and Anthropic’s rivals understand exactly how the company built what has become a widely adopted coding tool. Claude Code had already picked up steam following the release of Claude Opus 4.6 in early 2026, and its integration with Cowork for computer control marked another product expansion.

Gergely Orosz, founder of The Pragmatic Engineer newsletter, captured the ambiguous public reaction in a post on X: “This is either brilliant or…”, leaving the assessment pointedly unfinished. The remark reflects wider uncertainty about whether the exposure, however accidental, might ultimately accelerate or damage trust in AI coding tools as a category.

Key questions remain unanswered. Anthropic has not disclosed what process change will prevent a source map from shipping in future npm packages. The fate of the claw-code repository โ€” and whether additional copyright actions will reach it โ€” is unresolved. It is also unclear whether the Tamagotchi-style pet or the always-on agent will ship on the timeline the leaked code implies, or whether internal plans will shift in response to the exposure. As the community focused on LLMs, harness engineering, and agent workflows continues to pick apart the archive, the gap between what Anthropic intended to keep private and what is now permanently public will be difficult to close.
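Anthropic has not described the guard it will add, but a common safeguard in npm pipelines is a prepack check that refuses to publish if any source map is present in the output directory. The sketch below is a hypothetical version of such a guard; the `dist` directory, the `PACK_DIR` variable, and the script wiring are assumptions for illustration, not Anthropic's actual setup:

```typescript
// Hypothetical prepack guard: walk the directory that will be published and
// collect any .map files before `npm publish` can ship them.
import * as fs from "fs";
import * as path from "path";

function findSourceMaps(dir: string): string[] {
  const offenders: string[] = [];
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    const full = path.join(dir, entry.name);
    if (entry.isDirectory()) offenders.push(...findSourceMaps(full));
    else if (entry.name.endsWith(".map")) offenders.push(full);
  }
  return offenders;
}

// Wired into package.json as `"prepack": "node check-maps.js"` (name invented),
// so publishing fails before anything leaves the machine:
const packDir = process.env.PACK_DIR ?? "dist";
const offenders = fs.existsSync(packDir) ? findSourceMaps(packDir) : [];
if (offenders.length > 0) {
  console.error("refusing to publish; source maps found:", offenders);
  process.exit(1);
}
```

Belt-and-suspenders variants also restrict the package.json `files` allowlist so that only intentionally listed artifacts can enter the tarball at all.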

FAQ – Frequently Asked Questions

Will Anthropic be releasing the virtual “pet” companion feature to the public?

While the leaked code suggests that Anthropic was developing a Tamagotchi-style virtual pet companion, there is no official word on its release. Insiders indicate that the feature is still in the experimental phase and may undergo significant changes before any potential launch.

How will the exposed source code impact Claude Code’s security and future development?

The exposure of Claude Code’s internal source code may lead to a more rapid identification of potential security vulnerabilities by the developer community. However, Anthropic’s prompt response in filing copyright takedown requests has likely mitigated some of the risks. The company is expected to incorporate community feedback and security patches into future updates.

Can developers use the leaked code to create a fully functional clone of Claude Code?

While the leaked TypeScript source provides valuable architectural insights, creating a fully functional clone of Claude Code would still require significant development efforts. The exposed code lacks essential components, such as trained models and proprietary data, making it difficult to replicate the tool’s functionality without substantial additional work.

Laszlo Szabo / NowadAIs

Laszlo Szabo is an AI technology analyst with more than six years covering artificial intelligence developments, specializing in large language models, ML benchmarking, and AI industry analysis.
