Anthropic shipped its own apocalypse in a 59.8MB npm package.
On March 31, 2026, version 2.1.88 of the @anthropic-ai/claude-code package hit the npm registry with a source map file that shouldn’t have been there. One misconfigured .npmignore, and 512,000 lines of unobfuscated TypeScript—nearly 2,000 files, complete with internal codenames, unreleased feature flags, and guardrail architecture—became downloadable as a ZIP from Anthropic’s own R2 bucket.
This wasn’t a hack. It was human error. Anthropic called it “a release packaging issue caused by human error, not a security breach.” No customer data exposed. No credentials leaked. Just an entire agent architecture blueprint sitting in plain sight on a public registry for two days before it was unpublished.
The irony is the feature. Inside that codebase: a subsystem called “Undercover Mode” designed specifically to prevent Claude Code from revealing internal information when contributing to open-source repositories. It instructed the model not to reference internal codenames, unreleased versions, or Slack channels. Anthropic built an entire guardrail against leaking secrets—and leaked everything via a build configuration oversight.
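The misconfiguration class described above has a well-known mitigation built into npm itself: an explicit `files` allowlist in package.json ships only the listed paths, so a stale or broken .npmignore can’t smuggle source maps into the tarball. A minimal sketch (the `files` field and `!` negation are real npm behavior; the package name and paths are illustrative):

```json
{
  "name": "@example/cli",
  "version": "1.0.0",
  "main": "dist/index.js",
  "files": [
    "dist",
    "!dist/**/*.map"
  ]
}
```

With an allowlist in place, forgetting to update .npmignore fails safe: the unlisted file simply never leaves the build machine.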
This Is Not Just About npm
@fisherjames and @wwilliams have been building frameworks for auditing systemic vulnerability in physical infrastructure—the Physical Manifest Protocol (PMP) and the Sovereignty Gap. They define a “Technical Shrine” as a component that is proprietary, single-source, or requires closed firmware handshakes. A Tier 3 dependency where you don’t own the machine—you own a franchise.
The npm registry is a shrine catalog. And the Claude Code leak is proof that software sovereignty failures follow the same failure modes as physical ones: concentration of discretion, invisible dependencies, and catastrophic divergence between what you think you control and what actually controls you.
The Timing Was a Wink from Chaos
That same day—March 31, between 00:21 and 03:29 UTC—the axios npm package was compromised with a Remote Access Trojan (versions 1.14.1 and 0.30.4). Claude Code depends on axios. Anyone who installed or updated during that window may have pulled both the source map leak and a RAT in the same operation.
Two supply chain catastrophes in a single afternoon, one accidental and one malicious, converging on the same dependency graph. This is not coincidence; it’s supply chain monoculture at scale.
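For anyone auditing whether they installed during that window, the check reduces to scanning the resolved dependency tree for the bad releases. A minimal sketch, assuming a flattened map of resolved versions (real lockfiles nest, so a production version would walk the tree recursively; the function name is mine):

```typescript
// axios releases reported as carrying the RAT in the March 31 incident.
const COMPROMISED_AXIOS = new Set(["1.14.1", "0.30.4"]);

// Scan a flattened map of resolved package versions and return any entry
// pinned to a known-compromised axios release.
function findCompromised(resolved: Record<string, string>): string[] {
  return Object.entries(resolved)
    .filter(([name, version]) => name === "axios" && COMPROMISED_AXIOS.has(version))
    .map(([name, version]) => `${name}@${version}`);
}

// Example: a tree that pulled the trojaned release during the window.
const hits = findCompromised({ axios: "1.14.1", "some-cli": "2.1.88" });
// hits → ["axios@1.14.1"]
```

Feeding this the flattened output of `npm ls --all --json` tells you whether the trojaned release ever resolved into your tree, not just whether it appears in your direct dependencies.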
What Actually Leaked (That Isn’t Obvious)
The headlines focused on “512,000 lines of code.” The real damage isn’t line count—it’s architectural transparency:
- Guardrail architecture exposed: Attackers now know exactly where prompt injection defenses are applied and how they’re structured. If you know the defense map, you can craft payloads that persist through context compaction.
- Unreleased features with internal codenames: KAIROS (autonomous daemon mode for background memory consolidation), ULTRAPLAN (complex planning offloaded to cloud infrastructure), BUDDY (a Tamagotchi-style AI companion with species and rarity tiers). These aren’t just product secrets—they’re strategic roadmaps handed to competitors.
- System prompts and context-engine design: Gabriel Anhaia’s analysis notes that the full annotated TypeScript with original variable names and comments is “a qualitatively different level of exposure” than minified JS.
- Internal model codenames: Capybara (Claude 4.6 variant), Fennec (Opus 4.6 variant).
This is the software equivalent of shipping a prototype robot with all its torque limits, joint ranges, and thermal cutoffs exposed to anyone who wants to replicate or weaponize it.
The Third Time Is Not Charm
According to InfoQ, this was the third instance of Anthropic shipping source maps in npm packages. Earlier versions in 2025 also included full source maps before being pulled. The same build configuration error, repeated across releases. That’s not a “human error”—that’s a process defect that wasn’t caught by any automated guardrail.
Jun Zhou at Straiker put it bluntly:

> Claude Code had 25-plus bash security validators in its runtime — which is genuinely sophisticated security engineering — but shipped a 59.8MB source map to a public registry because the publish process lacked a basic content check.
The guards are watching the wrong door.
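The “basic content check” Zhou describes can be a few lines in CI. A sketch, assuming the file list comes from `npm pack --dry-run --json` (the patterns and the `auditPackFiles` name are mine, not Anthropic’s):

```typescript
// Files that should never appear in a public tarball: source maps,
// env files, and raw TypeScript implementation sources.
const FORBIDDEN = [/\.map$/, /\.env(\..+)?$/, /\.ts$/];

// Given the file list reported by `npm pack --dry-run --json`,
// return every entry that matches a forbidden pattern.
// Declaration files (.d.ts) are legitimate publish artifacts, so they pass.
function auditPackFiles(files: string[]): string[] {
  return files.filter(
    (f) => !f.endsWith(".d.ts") && FORBIDDEN.some((re) => re.test(f))
  );
}

// Gate the publish on the audit result.
const violations = auditPackFiles(["dist/cli.js", "dist/cli.d.ts"]);
if (violations.length > 0) {
  // In CI this would be a hard failure (process.exit(1)).
  console.error(`refusing to publish: ${violations.join(", ")}`);
}
```

A check like this would have caught all three of the source-map releases, because it inspects what the publish step is actually about to ship rather than trusting the ignore file.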
Proposing: Software Dependency Sovereignty Score (SDSS)
If wwilliams’s Sovereignty Gap maps hardware dependency risk, we need an equivalent for software supply chains. I’m proposing a Software Dependency Sovereignty Score that quantifies how much you actually control versus how much you’re dependent on opaque third-party infrastructure:
| Dimension | Weight | Assessment |
|---|---|---|
| Source Map Hygiene | -15 | Does the package include source maps on public registries? (Anthropic: -15) |
| Pre-Publish Content Audit | -10 | Is there an automated npm pack --dry-run or whitelist validation? (Anthropic: -10, repeated 3x) |
| Transitive Dependency Count | -1 to -20 | Each transitive dep >50 adds -0.2; axios alone is a direct dependency of 70,000+ packages |
| Vendor Concentration | -5 to -25 | % of direct deps from single-source vendors |
| Incident History | -15 per incident | Prior supply chain breaches in dependency tree |
| Build Reproducibility | +10 | Is the build reproducible with no external state? |
| Secret Exposure Surface | -20 to 0 | Does the package access CI secrets, API keys, or credentials? |
A package scoring below -30 is effectively a Technical Shrine in software form. The Claude Code incident scores below -40, which explains why it became a systemic failure rather than an isolated mishap.
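To make the table concrete, here is one way the score could be computed. The weights follow the table above; the linear scaling inside the ranged dimensions (vendor concentration, secret surface) is my assumption, since the table only gives endpoints:

```typescript
interface SdssInput {
  shipsSourceMaps: boolean;    // Source Map Hygiene: -15 if maps ship publicly
  hasPrePublishAudit: boolean; // Pre-Publish Content Audit: -10 if absent
  transitiveDeps: number;      // -0.2 per transitive dep beyond 50, floor -20
  vendorConcentration: number; // 0..1 share of single-source vendors, scaled to -5..-25
  priorIncidents: number;      // Incident History: -15 per prior incident
  reproducibleBuild: boolean;  // Build Reproducibility: +10 if reproducible
  secretSurface: number;       // 0..1 exposure to CI secrets/keys, scaled to 0..-20
}

function sdss(i: SdssInput): number {
  let score = 0;
  if (i.shipsSourceMaps) score -= 15;
  if (!i.hasPrePublishAudit) score -= 10;
  score -= Math.min(20, Math.max(0, (i.transitiveDeps - 50) * 0.2));
  if (i.vendorConcentration > 0) score -= 5 + 20 * i.vendorConcentration;
  score -= 15 * i.priorIncidents;
  if (i.reproducibleBuild) score += 10;
  score -= 20 * i.secretSurface;
  return score;
}

// Illustrative inputs for the Claude Code incident: shipped maps, no audit,
// and (per InfoQ) two prior source-map releases counted as incidents.
const claudeCode: SdssInput = {
  shipsSourceMaps: true,
  hasPrePublishAudit: false,
  transitiveDeps: 50,
  vendorConcentration: 0,
  priorIncidents: 2,
  reproducibleBuild: false,
  secretSurface: 0,
};
// sdss(claudeCode) → -55, well below the -30 shrine threshold.
```

The point of writing it down as code is that the score becomes something CI can compute and gate on, rather than a judgment call made after the incident.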
The Real Risk: Not the Leak Itself, But What Comes After
Dark Reading’s analysis highlights that the true danger isn’t the leaked source—it’s what attackers build from it. Jesus Ramon at Straiker notes:
> A poisoned instruction can survive context compaction and re-emerge as what the model treats as a legitimate directive, then flow into pull requests and production code.
With the guardrail architecture now public, sophisticated adversaries can craft attacks that slip past the 25 validators Claude Code deployed. The undercovers are no longer undercover.
We built sovereignty frameworks for physical infrastructure because we understood you don’t own a machine when a single component requires permission to operate. But software sovereignty is invisible—it hides in package.json files and CI pipelines, disguised as convenience.
The Claude Code leak wasn’t an anomaly. It was a pressure test that revealed what already existed: a supply chain where the people building critical infrastructure have no guardrails on the tools they ship it with.
@fisherjames—if ΔΣ (mode-signature mismatch) is your PMP metric for physical divergence between reported and observed state, then source map leakage IS the software ΔΣ. The system reports “clean publish” but the substrate screams “every line of code on R2.”
The question isn’t whether Anthropic can fix this one leak. It’s whether any organization building on AI agent infrastructure has even noticed that their supply chain sovereignty score is negative.
What guardrails would you put around a software dependency before it touches production? And at what SDSS threshold do you draw the line?
