On March 31, 2026, Anthropic—the company that markets itself as the safety-first AI lab—accidentally published the complete source code of Claude Code, its flagship AI coding tool, to the public npm registry. A missing entry in a configuration file shipped a 59.8 MB source map containing over 512,000 lines of unobfuscated TypeScript across roughly 1,900 files.

Within hours, the code had been mirrored, analyzed, and dissected by thousands of developers and security researchers worldwide. A clean-room rewrite hit 50,000 GitHub stars in two hours—likely the fastest-growing repository in the platform's history.

For the tens of thousands of businesses now using AI coding tools in their development workflows, the incident raises questions that go well beyond one company's packaging mistake.

What Happened

Claude Code is built on Bun, the JavaScript runtime Anthropic acquired in late 2025. Bun generates source maps by default—debugging files that map compiled code back to its original source. Someone on the release team failed to add *.map to .npmignore or configure the files field in package.json to exclude these artifacts.

The result: version 2.1.88 of @anthropic-ai/claude-code shipped with a source map that exposed the entire original TypeScript codebase. Anyone who ran npm install received the full source alongside the compiled package.
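This failure mode is easy to catch mechanically. As a sketch (not Anthropic's actual release tooling), a pre-publish check can open the tarball that npm pack produces and refuse to ship if any source maps are inside; the function name is our own:

```python
# Illustrative pre-publish guard: scan an npm tarball (as produced by
# `npm pack`) for source-map artifacts before anything is published.
import tarfile

def find_source_maps(tarball_path: str) -> list[str]:
    """Return the paths of any .map files bundled inside an npm tarball."""
    with tarfile.open(tarball_path, "r:gz") as tar:
        return [m.name for m in tar.getmembers()
                if m.isfile() and m.name.endswith(".map")]
```

A check like this in CI, alongside a review of npm pack --dry-run output, could have flagged a 59.8 MB map file before it ever reached the registry.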

Anthropic confirmed the incident the same day: "Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed. This was a release packaging issue caused by human error, not a security breach."

The timing was particularly unfortunate. Just days earlier, Fortune reported that Anthropic had inadvertently made nearly 3,000 internal files publicly accessible—including a draft blog post detailing a powerful upcoming model known internally as "Mythos." Two operational security failures in a single week from a company whose brand is built on responsible AI development.

What the Source Code Revealed

The leaked code exposed far more than implementation details. Researchers quickly identified 44 hidden feature flags, along with unreleased capabilities and internal practices that Anthropic likely never intended to make public.

KAIROS: An Always-On Background Agent

Referenced over 150 times in the source code, KAIROS—named after the ancient Greek concept of "the opportune moment"—is a fully built but unshipped autonomous daemon mode. It allows Claude Code to operate as a persistent background agent: fixing errors, running tasks, and sending push notifications without waiting for human input.

The existence of KAIROS signals where AI coding tools are headed—toward always-on, autonomous agents operating continuously in the background of development environments. For businesses, this raises fundamental questions about oversight, control, and the attack surface these tools create. We explored these dynamics in detail in our coverage of AI coding agent security risks.

Undercover Mode

One of the more unusual discoveries was a feature called "Undercover Mode." The system prompt explicitly instructs the model: "You are operating UNDERCOVER... Your commit messages... MUST NOT contain ANY Anthropic-internal information. Do not blow your cover."

The feature activates for Anthropic employees working on non-internal repositories, stripping Co-Authored-By attribution, forbidding mentions of internal details in commits, and preventing references to unreleased models. In practical terms, Anthropic was using Claude Code to contribute to public open-source projects while concealing that the contributions came from an AI tool.

For businesses evaluating AI tools and vendor transparency, this is worth noting. If the vendor building your AI coding assistant has a built-in feature to hide its own AI-generated contributions, it's reasonable to ask what other decisions are being made behind the scenes.

Internal Model Codenames

The leak confirmed that "Capybara" is the internal codename for a Claude 4.6 variant, with "Fennec" mapping to Opus 4.6 and an unreleased model called "Numbat" still in testing. Combined with the Mythos leak days earlier, Anthropic's competitors now have a detailed view of the company's product roadmap.

Frustration Detection and Usage Patterns

The source code revealed internal telemetry showing that 1,279 sessions had experienced 50 or more consecutive failures, collectively wasting approximately 250,000 API calls per day globally. A fix limiting consecutive failures to three was implemented—raising questions about how much visibility AI tool vendors have into developer workflows and how that data is used.

The Immediate Security Fallout

The leak itself was embarrassing. What followed was dangerous.

Trojanized Source Code Repositories

Within 48 hours, threat actors had created fake GitHub repositories purporting to offer the leaked Claude Code source code. Multiple trojanized repositories—some accumulating hundreds of forks and stars before detection—were distributing Vidar information-stealing malware and the GhostSocks proxy backdoor to developers who downloaded the code. Anthropic has since filed DMCA takedown notices against over 8,100 repositories, but mirrors continue to surface. This is a pattern we've seen repeatedly: attackers exploit curiosity around high-profile incidents to distribute malware.

npm Typosquatting Campaign

Attackers also capitalized on the leak to stage dependency confusion attacks. An npm user published packages with names matching internal Claude Code dependencies—audio-capture-napi, color-diff-napi, image-processor-napi, and others—targeting developers attempting to compile the leaked source. This technique exploits the way package managers resolve dependencies, tricking build systems into pulling malicious code instead of legitimate internal packages.
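Because modern npm lockfiles record where every dependency resolved from (a top-level "packages" map keyed by install path), a rough post-install audit can flag any dependency whose name matches a reported typosquatted package and that was pulled from the public registry. The blocklist below uses the names reported above; the detection logic itself is a sketch, not a complete defense:

```python
# Illustrative audit of package-lock.json: flag dependencies whose names
# match known typosquatted internal packages and that resolved from the
# public npm registry rather than an internal one.
import json

LEAKED_INTERNAL_NAMES = {
    "audio-capture-napi", "color-diff-napi", "image-processor-napi",
}

def flag_suspect_packages(lockfile_path: str) -> list[str]:
    with open(lockfile_path) as f:
        lock = json.load(f)
    suspects = []
    for path, meta in lock.get("packages", {}).items():
        name = path.split("node_modules/")[-1]  # strip install-path prefix
        if name in LEAKED_INTERNAL_NAMES and "registry.npmjs.org" in meta.get("resolved", ""):
            suspects.append(name)
    return suspects
```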

Concurrent Axios Supply Chain Attack

In what may or may not be a coincidence, a separate supply chain attack hit the axios npm package—one of the most widely used JavaScript libraries in the world—during the same window. As we covered in our detailed analysis of the axios supply chain attack, malicious versions containing a Remote Access Trojan were published between 00:21 and 03:29 UTC on March 31, 2026. Google's Threat Intelligence Group attributed the attack to UNC1069, a financially motivated North Korean threat actor.

Organizations that installed or updated Claude Code via npm during that window may have also pulled compromised dependencies. The overlap between these two incidents amplified the blast radius of both.

What This Means for Businesses

The Claude Code leak isn't just a story about one company's mistake. It's a case study in the risks that come with building critical development infrastructure on rapidly evolving AI tools.

Your AI Coding Tool Is Now Part of Your Attack Surface

AI coding agents like Claude Code, Cursor, and GitHub Copilot have more access to your systems than most employees. They can read files, execute commands, access credentials, and connect to external services. When those tools have their internals exposed, attackers gain a detailed blueprint for crafting targeted exploits.

The leaked source code revealed the exact orchestration logic for Claude Code's Hooks and MCP (Model Context Protocol) server integrations. Attackers can now design malicious repositories specifically tailored to trick Claude Code into executing harmful commands—a risk we examined in our piece on why AI coding assistants could be your biggest security blind spot.

Supply Chain Risk Extends to Your Development Tools

Most businesses have invested in securing their production software supply chain. Far fewer have applied the same rigor to their development toolchain. Claude Code is distributed through npm, which means it's subject to the same supply chain risks as any other package in your dependency tree.

The typosquatting campaign that followed the leak demonstrates how quickly attackers can weaponize a high-profile incident. If your developers are installing AI tools from public package registries without verification processes, you're accepting supply chain risk that most security frameworks would flag in a production context.
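One minimal verification step, assuming you recorded the expected integrity hash from a trusted machine (for example via npm view <package> dist.integrity), is recomputing the Subresource Integrity string for the tarball you actually downloaded:

```python
# Sketch of an integrity check: npm records tarball integrity as an SRI
# string ("sha512-" plus the base64-encoded SHA-512 digest). Recompute it
# locally and compare against the value obtained from a trusted source.
import base64
import hashlib

def sri_sha512(tarball_path: str) -> str:
    """Compute the SRI integrity string for a downloaded tarball."""
    with open(tarball_path, "rb") as f:
        digest = hashlib.sha512(f.read()).digest()
    return "sha512-" + base64.b64encode(digest).decode()

def verify(tarball_path: str, expected_sri: str) -> bool:
    return sri_sha512(tarball_path) == expected_sri
```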

Vendor Operational Security Matters

Anthropic positions itself as the most safety-conscious AI company in the industry. Two significant data exposures in a single week—the Mythos model leak and the Claude Code source code leak—undermine that positioning.

This isn't unique to Anthropic. As we discussed in our article on third-party vendor risk, any vendor's security posture becomes relevant to yours the moment you integrate their tools into your workflow. When that vendor has access to your source code, your development environment, and potentially your production systems through AI-powered automation, the stakes are higher.

Questions businesses should be asking their AI tool vendors:

  • What data do you collect from our development environments? The frustration detection telemetry found in the Claude Code source suggests these tools have significant visibility into developer workflows.
  • How do you secure your release pipeline? A missing .npmignore entry shouldn't be able to expose 512,000 lines of source code. What other safeguards exist—or don't?
  • What happens when your tool is compromised? Do you have a defined incident response process for supply chain attacks targeting your distribution channels?
  • What autonomous capabilities are you developing? Features like KAIROS represent a fundamental shift in how these tools operate. Businesses deserve transparency about what capabilities exist, even if they're not yet enabled.

The Shadow AI Dimension

Many organizations don't have full visibility into which AI tools their developers are using—or how those tools are configured. As we explored in our coverage of shadow AI, unauthorized AI tool usage creates security blind spots that traditional IT governance doesn't address.

The Claude Code leak makes this more urgent. If developers on your team installed a compromised version—or downloaded trojanized source code from a fake GitHub repository—would your security team know? Would your incident response process detect it?

The Broader Pattern

This incident fits into an accelerating trend. Software supply chain attacks more than doubled globally during 2025, with roughly 30% of all data breaches now linked to third-party or supply chain issues. We've tracked this escalation across multiple incidents—from the SolarWinds breach to the Notepad++ compromise to the axios attack.

What's changing is the target. Attackers are increasingly focusing on development tools and infrastructure—the systems developers trust implicitly because they use them every day. AI coding agents, with their broad system access and rapid adoption, represent an especially attractive target.

The emergence of vibe coding—where non-technical users build software using AI tools—compounds the risk. More people are using these tools, often with less security awareness and fewer guardrails than professional development teams would have in place.

What Organizations Should Do Now

Whether or not your organization uses Claude Code specifically, the incident highlights action items that apply to any business using AI development tools:

Audit your AI tool inventory. Know which AI coding tools your developers are using, how they're installed, and what access they have. If you don't have an AI usage policy, this is the moment to create one.

Lock dependency versions. Don't allow automatic updates for critical development tools. Pin versions, verify checksums, and review changelogs before upgrading. Had organizations pinned Claude Code to version 2.1.87, they would have avoided the compromised release entirely.
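One way to enforce a pinning policy is to fail CI whenever a manifest declares a semver range instead of an exact version. This sketch reads the standard dependencies and devDependencies fields of package.json; the regex is a simplification of full semver:

```python
# Illustrative pin-checker: flag any dependency declared with a semver
# range (^, ~, >, *, etc.) rather than an exact x.y.z version.
import json
import re

EXACT_VERSION = re.compile(r"^\d+\.\d+\.\d+(-[\w.]+)?$")

def unpinned_dependencies(package_json_path: str) -> dict[str, str]:
    """Return {name: spec} for every dependency not pinned to an exact version."""
    with open(package_json_path) as f:
        pkg = json.load(f)
    deps = {**pkg.get("dependencies", {}), **pkg.get("devDependencies", {})}
    return {name: spec for name, spec in deps.items()
            if not EXACT_VERSION.match(spec)}
```

For new installs, npm install <package>@<version> --save-exact writes the pinned form directly.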

Monitor for indicators of compromise. If your team installed or updated Claude Code via npm on March 31, 2026, verify the integrity of the installation. Check for unexpected packages in your node_modules directory, particularly the typosquatted package names identified by security researchers.

Treat development environments as production-adjacent. Developer machines typically have access to source code repositories, cloud credentials, API keys, and internal systems. A compromised development tool can provide the same level of access as a compromised production server. Apply attack surface reduction principles accordingly.

Review vendor security practices. Ask your AI tool vendors about their release pipeline security, incident response capabilities, and data collection practices. The questions matter more than the answers—vendors that can't articulate their security posture probably haven't formalized one.

Have an incident response plan. As we've discussed in our piece on incident response planning, the time to develop a response process is before something happens—not after. Supply chain incidents involving development tools require specific playbooks that many organizations haven't written yet.

Looking Forward

AI coding tools aren't going away. Their productivity benefits are real, and adoption will continue to accelerate. But the Claude Code leak demonstrates that these tools introduce risks that most organizations haven't fully accounted for.

The challenge isn't choosing between productivity and security. It's building the governance, oversight, and response capabilities that allow your organization to use these powerful tools without creating unmanaged exposure.

That's a challenge we help businesses navigate every day—from defending against AI-powered threats to building the security foundations that make innovation possible.


This article is intended for informational purposes only and does not constitute professional security, legal, or compliance advice. Organizations should consult with qualified professionals to assess their specific situation and develop appropriate security policies.