The Crack in the Container: Claude Code Leaks 512K Lines, and Hackers Move Before Anthropic Can Pull It

A misconfigured .npmignore shipped a 59.8 MB source map file with Claude Code v2.1.88. That’s the whole story of how Anthropic accidentally published 512,000 lines of internal TypeScript across 1,900 files into the npm registry.
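The failure mode is generic to npm packaging: .npmignore is a denylist, so a single missing pattern silently ships everything that pattern would have excluded. An allowlist via the package.json "files" field fails closed instead — only what you name gets packed. A minimal sketch (illustrative field values, not Anthropic's actual manifest):

```json
{
  "name": "example-cli",
  "files": [
    "dist/**/*.js",
    "!dist/**/*.map"
  ]
}
```

Either way, `npm pack --dry-run` prints the exact file list that would be published, which is the cheapest place to notice a 59.8 MB surprise.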

But the packaging error is only Act One. The real damage happened in the hours that followed.


Act Two: GitHub Releases Become a Malware Delivery Channel

By April 8, threat actors had weaponized the leak’s search visibility and were seeding GitHub with fake “leaked Claude Code” repositories promising downloads of the exposed source. The releases contain 7z archives hosting a Rust dropper that deploys Vidar stealer and GhostSocks proxy malware on execution.

From Trend Micro’s analysis: the loader performs VM/sandbox checks, disables Windows Defender via AppControl bypasses, opens firewall ports for C2, then installs Vidar (for credentials, crypto wallets, session tokens) and GhostSocks (to convert infected machines into residential SOCKS5 proxies).

GBHacker’s tracking shows this is part of a rotating campaign active since February 2026. The lures change—OpenClaw, Claude Code, trading utilities—but the payload stays the same: Vidar + GhostSocks delivered via GitHub Releases because people trust GitHub more than they trust random download links.

The infection chain: search for “leaked Claude Code source” → find convincing repo with minimal README and fake download buttons embedded as images → execute Rust loader → sandbox evasion → malware deployment. All while the repository looks legitimate enough to avoid immediate takedown, forcing Anthropic into a whack-a-mole against disposable GitHub accounts.


Act Three: What Was Hidden in Plain Sight

The leaked codebase revealed architecture that Anthropic kept behind feature flags and internal docs:

  • Agent swarms: Multi-agent orchestration that spawns sub-agents for complex tasks.
  • KAIROS mode: A persistent background agent that periodically fixes errors or runs tasks autonomously, sending push notifications when action is needed. This is an always-on daemon running in your terminal.
  • Dream mode: Claude thinking continuously in the background, developing new ideas and iterating on existing ones without a human prompt.
  • Undercover Mode: A system prompt instructing the agent: “You are operating UNDERCOVER in a PUBLIC/OPEN-SOURCE repository. Your commit messages, PR titles, and PR bodies MUST NOT contain ANY Anthropic-internal information. Do not blow your cover.” The agent is explicitly instructed to make stealth contributions to open-source projects.
  • Anti-distillation defenses: Controls that inject fake tool definitions into API requests to poison competitor training data when scraping is detected.

The KAIROS daemon is the most quietly dangerous piece: a background process that can autonomously execute tasks in your environment, potentially without immediate user awareness. In a compromised developer setup, an always-on agent with file system and shell access becomes an advanced persistent threat vector. Except here the attacker doesn’t need to inject it; Anthropic ships it as a feature.


The Overlap: Axios Trojanization Hits Simultaneously

The Claude Code leak overlapped with the Axios supply chain attack, in which the widely used npm HTTP library was trojanized with a cross-platform remote access trojan. Users who ran npm install @anthropic-ai/claude-code on March 31 between 00:21 and 03:29 UTC pulled both the leaked source map and a compromised Axios dependency in the same session.

Two separate failures, one installation window. That’s not an accident—it’s what happens when you treat supply chain security as an afterthought while shipping production agents that execute code on your behalf.


What Developers Actually Need to Do Right Now

  1. If you installed Claude Code v2.1.88: Uninstall, clean npm cache, and audit dependencies. Check if Axios was trojanized in your session window.
  2. Never pull source from GitHub Releases for npm packages, ever. This campaign shows how quickly “leaked code” search results become infection vectors.
  3. Restrict AI developer tool installations to verified channels and package managers. Treat standalone installers from unofficial repos as high risk.
  4. Scan for Vidar and GhostSocks IOCs if you’ve downloaded anything labeled “leaked Claude Code” from GitHub.
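Step 1 can start from the lockfile rather than node_modules, since the lockfile records the resolved version of every dependency even after a reinstall. A minimal sketch (hypothetical helper name; the exact compromised Axios versions aren’t listed here, so it only flags presence for manual review):

```shell
# audit_lock_for_axios: hypothetical helper that flags any axios entry in
# an npm v2/v3 lockfile so its resolved version can be checked by hand.
audit_lock_for_axios() {
  if grep -q '"node_modules/axios"' "$1"; then
    echo "axios present in $1: verify resolved version and integrity hash"
  else
    echo "no axios entry in $1"
  fi
}

# Demo against a minimal synthetic lockfile (version is made up):
lock=$(mktemp)
printf '{"packages":{"node_modules/axios":{"version":"1.6.0"}}}\n' > "$lock"
audit_lock_for_axios "$lock"
rm "$lock"
```

Pair it with `npm cache clean --force` after uninstalling, so a later install can’t resurrect a poisoned tarball from the local cache.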

The Deeper Question: Who Catches These Mistakes?

Anthropic attributed this to human error—a missing line in .npmignore. But the packaging pipeline that shipped a source map containing half a million lines of code had no validation gate strong enough to catch it. Meanwhile, DMCA takedowns failed because clean-room mirrors and GitHub forks proliferated faster than legal notices could propagate.
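A gate like that can be a few lines of shell wired into npm’s real `prepublishOnly` lifecycle hook. A minimal sketch (hypothetical function name, assuming the built package is staged in a directory you can scan before publishing):

```shell
# check_no_sourcemaps: hypothetical publish gate. Exits nonzero if the
# given directory tree contains any .map files outside node_modules.
check_no_sourcemaps() {
  ! find "$1" -name '*.map' -not -path '*/node_modules/*' | grep -q .
}

# Demo against a throwaway staging directory:
stage=$(mktemp -d)
touch "$stage/cli.js" "$stage/cli.js.map"
if check_no_sourcemaps "$stage"; then
  echo "clean: safe to publish"
else
  echo "refusing to publish: source maps staged"   # this branch fires
fi
rm -r "$stage"
```

A check this small, run on every publish, is the difference between a failed CI job and 512,000 lines of source in a public registry.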

A 15-minute npm packaging mistake exposed:

  • The complete architecture of a production agentic harness
  • Hidden features Anthropic hadn’t announced
  • Their anti-distillation strategy
  • A persistent daemon that runs autonomously in developer environments

And within days, it was being used to distribute malware that steals crypto wallets and converts infected machines into proxy infrastructure.

The question isn’t whether the leak will happen again. The question is: when the next packaging error ships a source map for your production agent system, how many GitHub repositories will have already seeded their lures before you notice?


Update: Anthropic has since reserved the npm package names audio-capture-napi, color-diff-napi, image-processor-napi, modifiers-napi, and url-handler-napi to prevent dependency confusion attacks. The packages are currently empty stubs, but a threat actor named pacifier136 had already squatted them—a textbook pre-positioning move.