The Claude Code Leak: Your npm Pipeline Is Next

April 14, 2026 · 14 min read

TL;DR - On 31 March 2026, Anthropic accidentally published cli.js.map to the npm registry, exposing the complete TypeScript source for Claude Code: 1,884 files, 26 hidden slash commands, 32 feature flags, 120+ secret environment variables, hardcoded dev API keys, and model codenames for unreleased versions. Boris Cherny at Anthropic called it "human error" in a manual deploy process. Anthropic filed 8,100+ DMCA takedowns - but the mirror had already hit 84,000 GitHub stars. What you need to do: audit your npm publish config, scrub sourcemaps, rotate anything hardcoded, and assume your CI/CD is two lines away from the same incident.

The Leak By The Numbers

Metric | Value
TypeScript files exposed | 1,884
Hidden slash commands | 26
Build-time feature flags | 32
Secret environment variables | 120+
Unreleased model codenames | opus-4-7, sonnet-4-8
GitHub stars on the leak mirror | 84,000+
DMCA takedowns filed | 8,100+ (later scaled back)
Python fork stars in 24 hours | 111,000
Time from leak to supply-chain attack | Days (typosquat packages detected)

I've been watching the Claude Code leak unfold for two weeks now, and I keep coming back to the same thought.

This wasn't a breach. Nobody got phished. Nobody popped an admin box. Nobody dropped a zero-day on Anthropic's infrastructure. One of the best-funded, most security-conscious AI companies on earth shipped their complete application source to npm because somebody on the deploy team pressed the wrong button.

If it can happen to Anthropic, it can happen to you. And if you run any kind of JavaScript or TypeScript project that publishes artefacts, your config is probably two lines away from the same incident.

Let me walk you through what actually leaked, why the damage was already done before the DMCA notices went out, and what the smart teams are changing in their pipelines this week.


What the Claude Code Leak Actually Exposed

On 31 March 2026, a cli.js.map file was published to the npm registry as part of the Claude Code package. Sourcemaps are meant for local debugging. They're the file that lets your browser's devtools show you the original TypeScript when an error is thrown in minified JavaScript. In dev, they're essential. In production artefacts, they are the complete, reconstructable source code of your application, sitting one .map suffix away from the URL that serves your bundle.
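To see why a shipped sourcemap is the whole source: modern sourcemaps carry a sourcesContent array that embeds the original files verbatim. A minimal fabricated example (the filename and key string below are made up for illustration):

```shell
#!/bin/sh
# Fabricated sourcemap: `sourcesContent` embeds the original source verbatim,
# so shipping the .map file ships the code. Filename and key are made up.
cat > /tmp/demo.js.map <<'EOF'
{"version":3,"sources":["src/session-auth.ts"],
 "sourcesContent":["const DEV_KEY = 'sk-dev-example'; // the entire original TS file"],
 "mappings":"AAAA"}
EOF

# Anyone holding the .map can read the source (and its secrets) straight out:
grep -o "sk-dev-example" /tmp/demo.js.map
```

No reverse engineering required: the "reconstruction" is a JSON field read.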

Anthropic's cli.js.map contained 1,884 TypeScript files. The entire Claude Code CLI. Tool implementations, system prompts, session logic, tool approval pipelines, feature-flag wiring, and the test suite.

Security researcher Chaofan Shou flagged it on 31 March. The post hit 28 million views. By the time Anthropic responded, the code was mirrored, forked, and spread across GitHub.

What got exposed:

  • Hardcoded API keys for staging and development environments (three of them, with SDK prefixes)
  • Internal system prompts that could be dumped via a hidden --dump-system-prompt flag
  • Session tokens and OAuth refresh mechanisms
  • Model codenames for unreleased versions ("opus-4-7", "sonnet-4-8")
  • 22 secret Anthropic repository names listed in an "undercover mode" allowlist
  • Binary-level client attestation using a hardcoded salt
  • 26 hidden slash commands never documented publicly
  • 32 build-time feature flags, including some with alarming names

Boris Cherny at Anthropic confirmed it was "human error" - the deploy process had manual steps and no automation to catch sourcemap inclusion before publish.

One human, one manual step, one missed config flag. That's all it took.


The Flags That Should Scare You

Most of the leaked code is ordinary application logic. But some of it is not.

Three feature flags in the leaked source stood out to the research community:

  • DISABLE_COMMAND_INJECTION_CHECK - exactly what it sounds like
  • CLAUDE_CODE_ABLATION_BASELINE - disables a safety layer for internal testing
  • CLAUDE_CODE_UNDERCOVER - strips AI-contribution markers from generated code

Now, I am not suggesting Anthropic ships Claude Code with these flags enabled. They are almost certainly internal-only, used for research, A/B testing, or red-team scenarios. That's not the point.

The point is that these flags exist in the shipped binary, and their names are now public. A determined attacker can flip them. An overeager internal employee can flip them. A prompt-injection attack can trick the agent into flipping them.

If you ship DISABLE_SECURITY_CHECK flags in production artefacts, assume someone will read them and flip them. The only safe flag is the one that doesn't exist in shipped code.

The lesson for your own team: feature flags are threat intel. Anything your build system can toggle is something an adversary can discover and toggle. Either strip safety-relevant flags from production builds at compile time, or rename them so their presence leaks no information about what they do.
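A cheap backstop for the strip-at-compile-time approach is a CI step that greps the final bundle for flag names and fails the build if any survive. A minimal sketch, with a stand-in bundle file and the leaked flag names as the illustrative blocklist:

```shell
#!/bin/sh
# CI backstop: fail if any safety-flag name survives into the shipped bundle.
# The bundle path and flag list here are illustrative.
bundle=/tmp/dist-cli.js
printf 'console.log("app logic only")\n' > "$bundle"   # stand-in for the real bundle

leaked=0
for flag in DISABLE_COMMAND_INJECTION_CHECK CLAUDE_CODE_ABLATION_BASELINE CLAUDE_CODE_UNDERCOVER; do
  if grep -q "$flag" "$bundle"; then
    echo "FAIL: $flag present in bundle"
    leaked=$((leaked + 1))
  fi
done
echo "safety flags found in bundle: $leaked"
```

If the grep ever fires, the flag existed in shipped code, which is exactly the condition you wanted to make impossible.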


The Three Mistakes Every JavaScript Team Makes

I've consulted on more than a dozen Node.js and TypeScript pipelines in the past year. Three mistakes come up every single time. Anthropic hit all three.

Mistake #1: Shipping Sourcemaps to Production

Every modern bundler - webpack, Vite, esbuild, Rollup, Turbopack - will generate sourcemaps with a one-line config, and most dev presets turn them on. Most teams leave them on for local builds and forget to disable them for publish steps.

If your npm publish step doesn't explicitly exclude .map files, they go up with the rest of the tarball. Nobody notices until a researcher runs npm pack --dry-run on your package and sees the sourcemap in the file list.

Fixes that take five minutes:

  • Add *.map to your .npmignore or files field in package.json
  • Set generateSourceMaps: false in your production build config, or pipe through a scrub step
  • Run npm pack --dry-run as a pre-publish check in CI and grep for .map
  • If you legitimately need sourcemaps for error-reporting services (Sentry, Rollbar), upload them directly to the service and exclude them from the public artefact
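The pre-publish check above is a few lines of shell. In real CI you would feed it the file list from npm pack --dry-run; the stand-in list here keeps the sketch self-contained and runnable:

```shell
#!/bin/sh
# Pre-publish guard: fail the build if a sourcemap would ship in the tarball.
# In CI, replace this stand-in list with the output of `npm pack --dry-run`.
tarball_files="dist/cli.js
dist/cli.js.map
package.json
README.md"

maps=$(printf '%s\n' "$tarball_files" | grep -c '\.map$')
if [ "$maps" -gt 0 ]; then
  echo "FAIL: $maps sourcemap(s) in publish tarball"
else
  echo "OK: no sourcemaps in tarball"
fi
```

Wire the FAIL branch to a non-zero exit in CI and the build stops before anything reaches the registry.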

Mistake #2: Hardcoded Secrets "Just for Dev"

The three SDK keys found in the Claude Code leak were staging or development keys. Not production. But their presence in a public artefact is still a textbook violation of every secrets-management playbook.

Dev keys get reused. Dev keys get promoted. Dev keys get committed years ago and forgotten about. "It's only staging" becomes "production's down, use the staging key temporarily" becomes "the staging key is now a prod dependency."

The only acceptable approach:

  • No hardcoded secrets in source. Ever. Not even test fixtures.
  • Secrets come from environment variables at runtime, loaded from a secrets manager
  • CI/CD publishing uses OIDC-based authentication (npm provenance, GitHub Actions OIDC)
  • Rotate any secret that has ever existed in source, even if you think it was only for dev
  • Run secret scanning (TruffleHog, gitleaks, Semgrep) against the published artefact, not just the repo

That last one is the one most teams miss. Your repo might be clean. Your tarball might not be. Scan what you ship.
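"Scan what you ship" can be as simple as unpacking the tarball and pointing your scanner at the result. A crude grep-based illustration with a made-up key prefix pattern and file contents (a real pipeline should run gitleaks or TruffleHog over the unpacked directory instead):

```shell
#!/bin/sh
# Crude "scan what you ship" sketch: grep an unpacked tarball for key-like
# strings. The prefix pattern and file contents are illustrative only;
# use a proper scanner (gitleaks, TruffleHog) in production.
mkdir -p /tmp/unpacked/package
printf 'const key = "sk-ant-dev-example123";\n' > /tmp/unpacked/package/cli.js

hits=$(grep -rEl 'sk-[a-z]+-[A-Za-z0-9]+' /tmp/unpacked/package | wc -l | tr -d ' ')
echo "files containing key-like strings: $hits"
```

The point is the target, not the tool: run it against the artefact directory, not the repo checkout.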

Mistake #3: Manual Deploy Steps in a Critical Pipeline

Boris Cherny's phrase was "human error." Which is true but incomplete. The error was human because the pipeline required a human.

Every manual step in a deploy process is a place where your junior engineer, your senior engineer, or the person covering for someone on leave can make a mistake that ships to production. Anthropic's team is not stupid. They built Claude Code. They almost certainly have better engineers than most of us. And they still tripped.

The controls:

  • Eliminate manual steps in the publish pipeline. If a human has to type a command at a specific moment, the pipeline is broken.
  • Every artefact that goes to a public registry is produced by CI, not a laptop
  • Pre-publish hooks that diff the tarball against an allowlist of permitted file patterns
  • npm publish with --provenance so the published package has a verifiable chain back to the commit
  • Block direct publish from developer machines at the organisation level in npm
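One way to enforce the laptop ban inside the publish script itself is a guard on the CI environment variable (GitHub Actions sets CI=true). A sketch, with the real publish line commented out so it runs anywhere:

```shell
#!/bin/sh
# Guard a publish script so it only runs inside CI (GitHub Actions sets
# CI=true). The actual publish is commented out to keep the sketch runnable.
publish() {
  if [ -z "${CI:-}" ]; then
    echo "refused: publish must run from CI, not a laptop"
    return 1
  fi
  # npm publish --provenance --access public
  echo "published with provenance"
}

unset CI                      # simulate a developer machine
publish || guard_exit=$?
echo "guard exit code: ${guard_exit:-0}"
```

This belongs alongside, not instead of, the org-level publish block: defence in depth, because environment variables can be faked.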

If you can't ship from a laptop, you can't make Anthropic's mistake.


The DMCA Problem

Anthropic's response was to file 8,100+ DMCA takedowns against GitHub repositories mirroring the leaked code. After public backlash and accusations of overreach, they scaled the campaign back to a single repository and 96 forks.

This matters less than it sounds.

The damage was done in hours, not days. The clean-room rewrite ("claw-code") crossed 100,000 stars. A Python fork hit 111,000 stars in 24 hours. The mirror had 84,000 stars before takedowns began. By the time the legal team responded, every competitor, every security researcher, and every opportunistic fork maintainer already had the code.

DMCA is not a security control. It is a cleanup tool for when a security control has already failed.

If you're relying on legal threats to contain a source-code leak, you are already past the failure point where security controls were supposed to work. Prevention is the only strategy that matters.

There's also a reputational dimension. Anthropic spent a decade building credibility in the AI safety community. Spending that capital sending 8,100 takedown notices - many of which targeted security researchers publishing analysis, not the leaked code itself - cost them more than the leak did. The research community has long memories.


The Supply Chain Aftermath

Within days of the leak, typosquat npm packages appeared exploiting the situation:

  • color-diff-napi
  • modifiers-napi

These were named to look like internal Anthropic dependencies referenced in the leaked source. They carried malware. Anyone who ran npm install against a dependency tree that pulled these in got compromised.

This is the part of the incident that will keep happening for months. Every leaked internal package name is a typosquat opportunity. Every leaked model codename is a social-engineering lure ("we're beta-testing opus-4-7, click here to sign up"). Every leaked feature-flag name is a phishing email template.

If your org uses Claude Code - or is thinking about it - this week:

  1. Pin the version. Don't auto-upgrade. Lock to a known-good tag.
  2. Audit your package-lock.json for any dependency names that look suspiciously close to Anthropic-internal ones from the leak.
  3. Enable npm's --ignore-scripts globally so post-install scripts from typosquatted packages can't run automatically.
  4. Watch for "official" Anthropic emails referencing the leaked codenames. The leak is a gift to phishers.
  5. Subscribe to npm's advisory feed so you hear about malicious package takedowns as they happen.
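The lockfile audit in step 2 is scriptable. A sketch that checks a stand-in lockfile against the two typosquat names reported so far:

```shell
#!/bin/sh
# Scriptable typosquat audit: check a lockfile for the malicious package
# names reported after the leak. The lockfile here is a minimal stand-in.
cat > /tmp/demo-package-lock.json <<'EOF'
{
  "packages": {
    "node_modules/color-diff-napi": { "version": "1.0.0" },
    "node_modules/left-pad": { "version": "1.3.0" }
  }
}
EOF

suspects_found=0
for name in color-diff-napi modifiers-napi; do
  if grep -q "node_modules/$name" /tmp/demo-package-lock.json; then
    echo "SUSPECT: $name is in the lockfile"
    suspects_found=$((suspects_found + 1))
  fi
done
echo "suspect packages: $suspects_found"
```

Point it at your real package-lock.json and extend the name list as new typosquats are reported.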

The Controls Every Team Should Implement This Week

This is the checklist I'm sending every client I've worked with in the past 12 months. If you ship any artefact to a public registry - npm, PyPI, NuGet, Maven, crates.io, Docker Hub - go through it.

1. Sourcemap hygiene

  • Production builds set sourcemap: false (or equivalent for your bundler)
  • *.map explicitly excluded from your publish manifest
  • A CI step that fails the build if .map files are present in the publish tarball
  • If sourcemaps are required for error-reporting tools, upload them directly to the tool's private ingest endpoint, never the public artefact

2. Secrets scanning

  • Pre-commit hooks with secret detection (gitleaks, TruffleHog, Semgrep)
  • CI step that scans the built artefact for secrets, not just the repo
  • Org-wide GitHub secret scanning enabled with push protection
  • Quarterly rotation of shared dev and staging secrets, even if "unused"
  • No secrets in Dockerfiles, config files, or README samples - use placeholder values

3. Publish pipeline hardening

  • No manual steps in the publish flow - fully automated from tagged commit to registry
  • npm publish with --provenance flag enabled
  • OIDC-based authentication (GitHub Actions to npm, or equivalent) instead of long-lived tokens
  • Developer machines blocked from direct publish at the org level
  • Publish workflow scoped to a specific protected branch

4. Artefact auditing

  • Run npm pack --dry-run (or equivalent) and inspect the file list before every publish
  • Allowlist of permitted file patterns - block anything outside it
  • Publish workflow emits an artefact hash and file list as a build artefact you can audit later
  • SBOM generated on every build (CycloneDX or SPDX format)
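The hash-and-file-list audit record above takes two commands to produce. A sketch with a dummy tarball standing in for real npm pack output:

```shell
#!/bin/sh
# Emit a hash + file list audit record for a publish tarball. A dummy
# tarball stands in for real `npm pack` output here.
mkdir -p /tmp/pkg/package
echo 'console.log("cli")' > /tmp/pkg/package/cli.js
tar -czf /tmp/pkg/demo-1.0.0.tgz -C /tmp/pkg package

sha256sum /tmp/pkg/demo-1.0.0.tgz  > /tmp/pkg/publish-audit.txt
tar -tzf  /tmp/pkg/demo-1.0.0.tgz >> /tmp/pkg/publish-audit.txt
cat /tmp/pkg/publish-audit.txt
```

Attach the resulting file to the CI run as a build artefact and you can answer "what exactly did we ship on date X?" months later.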

5. Post-incident preparedness

  • Know who in your org has DMCA filing authority and how long it takes
  • Have a communications template ready for "we accidentally leaked X"
  • Incident response plan covers supply-chain events, not just network intrusions
  • Customer notification path tested (not just compliance officer to lawyer to silence)

What This Means for the Industry

I want to flag something that bothers me about this incident more than the technical details.

Anthropic is one of the most security-aware companies in AI. They have a safety team that publishes red-team research. They ship vulnerability disclosure programmes. Their public posture is "we think about this stuff more carefully than anyone."

And they shipped their source to npm because a human pressed a button.

That's not a criticism - it's a humbling data point. If Anthropic's pipeline is vulnerable to human error at the publish step, yours almost certainly is. The difference between you and them is that your leak won't make the front page of Hacker News. Your leak will be quietly mirrored, quietly exploited, and you'll only find out when a researcher emails you or a breach notification lands from one of your customers.

The broader industry lesson: if your security depends on "don't make mistakes", you don't have security, you have luck. Every control we've covered above - automated publish, provenance, artefact scanning, sourcemap scrubbing - exists specifically because humans are going to make mistakes. Pipelines that assume human perfection always fail, eventually, publicly, and at the worst possible moment.

The good news is that none of these controls are expensive. Most of them take an afternoon to implement and are free for the tool tier most SMBs operate at. The cost of doing nothing is much higher than the cost of doing something.


Key Takeaways

  • The leak was a single sourcemap published to npm by a manual deploy step - not a breach, not a hack
  • 1,884 TypeScript files exposed, including hardcoded staging keys, 26 hidden slash commands, 32 feature flags, and model codenames for unreleased versions
  • Three safety-bypass flags (DISABLE_COMMAND_INJECTION_CHECK, CLAUDE_CODE_ABLATION_BASELINE, CLAUDE_CODE_UNDERCOVER) are now public knowledge
  • DMCA takedowns failed to contain the leak - by the time 8,100 notices went out, the code had 84,000 stars on mirror repos
  • Typosquat npm packages exploiting the leak appeared within days - supply chain aftermath continues
  • The fix for your team is sourcemap scrubbing, secrets scanning of the artefact, automated publish pipelines with provenance, and no manual deploy steps
  • DMCA is not a security control - prevention is the only strategy that matters for source-code leaks

Frequently Asked Questions

What is the Claude Code leak? On 31 March 2026, Anthropic published a cli.js.map sourcemap file to the npm registry as part of the Claude Code package. The sourcemap reconstructed the full TypeScript source of the application - 1,884 files - and was publicly downloadable before it was noticed.

How did the Claude Code source leak happen? Per Anthropic's Boris Cherny, it was "human error" in a manual deploy process that lacked automation to catch sourcemap inclusion before publishing. No breach, no compromise - a missed configuration in the publish step.

What was exposed in the Claude Code leak? Complete TypeScript source for 1,884 files, 26 hidden slash commands, 32 build-time feature flags (including safety-bypass flags), 120+ secret environment variables, hardcoded staging/development API keys, internal system prompts, session token handling, and codenames for unreleased models (opus-4-7, sonnet-4-8).

Is Claude Code still safe to use? The leaked source does not automatically compromise Claude Code installations. However, IT teams should pin their version, monitor for typosquat packages exploiting the leak, and watch for phishing referencing leaked internal codenames. Anthropic has not confirmed whether they rotated the exposed hardcoded keys.

How do I prevent this in my own npm pipeline? Five controls: (1) strip sourcemaps from production builds, (2) scan the published artefact for secrets, not just the repo, (3) automate the publish pipeline with no manual steps, (4) use npm provenance and OIDC-based auth, (5) block direct publishing from developer machines at the org level.

Are the DMCA takedowns going to stop the leak spreading? No. By the time Anthropic filed 8,100+ DMCA notices, the code had already been mirrored and forked extensively. The leaked repos had 84,000+ stars, a Python fork hit 111,000 stars in 24 hours, and clean-room rewrites crossed 100,000 stars. DMCA is cleanup, not prevention.

What should I do if my team uses Claude Code? Pin the version instead of auto-upgrading, audit your package-lock.json for typosquats of Anthropic-internal package names, enable --ignore-scripts globally, verify you're installing from the official package, and treat any email referencing leaked model codenames as a potential phishing attempt.


My Take

Every IT leader I've spoken to about this leak asks the same question: "could this happen to us?" The honest answer is: almost certainly yes, if you haven't specifically invested in pipeline hygiene.

I'm not writing this to dunk on Anthropic. Their overall security posture is genuinely among the best in the AI industry, and this incident is going to force the entire sector to get serious about deploy pipeline hygiene in a way that quiet, responsible improvements never would. Every CTO reading the news this month is now asking their infrastructure team uncomfortable questions, and that is a net good.

But there is a lesson here about where the industry's attention goes. Most security budgets get spent on the things that make for exciting conference talks: zero-days, APTs, nation-state threat models, AI-powered attack detection. The Claude Code leak happened because somebody skipped one automation step in a publish process. The fix is boring. The controls are boring. The CI/CD hygiene is boring. And that's exactly why nobody works on it.

The most dangerous security failures are not the ones that require genius adversaries. They are the ones that require a tired engineer at the end of a long day, a manual step in a critical pipeline, and nothing between that person and a public registry.

If the Claude Code leak convinces your team to spend a week on deploy pipeline hardening, it will have paid for itself many times over. If it doesn't, your own leak is already scheduled - you just don't know the date yet.


Ready to stop winging security? Join 158+ Australians getting one 5-minute security briefing every Friday - plus grab the Personal Security Quick-Start Guide that's helping IT pros, families, and small business owners get the basics right.

Get The Free Guide →


Mathew Clark Founder, SecureInSeconds Currently: Running npm pack --dry-run on every project I've touched in the past year and quietly sweating

