TL;DR: On March 31, 2026, Anthropic accidentally shipped the entire source code of Claude Code to the public npm registry via a single misconfigure...
WOW: Frustration Detection via Regex. It's useless against me, because I often talk to the LLM in Hungarian.
lol
It was shocking!
This has to be a bout of incompetence eh?
Honestly, this does point to engineering incompetence
The missing line in .npmignore is the most instructive angle. One build-config gap and 512k lines ship publicly. The same pattern shows up at the PR layer: teams assume something else is catching what the agent ships (the build pipeline, the CI checks, the review process), until nothing is.
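One way to make that failure mode fail closed is to invert the default: instead of a `.npmignore` denylist, npm's `files` field in `package.json` is an allowlist, so a forgotten entry keeps files out of the tarball rather than putting them in. A minimal sketch (package name and paths are hypothetical):

```json
{
  "name": "example-cli",
  "version": "1.0.0",
  "files": [
    "dist/",
    "README.md"
  ]
}
```

With this, `npm publish` includes only `dist/` plus a few files npm always ships (`package.json`, the license, the readme); source, tests, and fixtures stay out by default.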
In our latest cohort, we noticed that the biggest challenge isn't code leaks but managing the rapid deployment of AI agents. Most teams stumble because they haven't integrated these agents into their existing workflows effectively. It's not just about having the code, it's about knowing how to use it to streamline operations and decision-making. - Ali Muwwakkil (ali-muwwakkil on LinkedIn)
And if this is happening even to such big players, what's your guess on how many similar leaks are already out there?
Yeah, there probably are. If we pay close attention to small but widely used projects, we may find similar vulnerabilities, especially in corporate environments where Node and other dependencies are not updated regularly.
Exactly!
Honestly, the .npmignore thing is interesting, but what caught my eye is the 250,000 failed API calls per day buried in the code. I use Claude Code daily and that number explains a lot. Turns out they also hide your token usage from subscription users even though it's sitting right there in local JSONL files (aiengineering.report/p/the-hidden....). The real story here isn't how the code got out, it's what was in it.
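If those logs really are newline-delimited JSON, totaling your own usage is a few lines of code. A sketch under assumed field names (`message.usage.input_tokens` / `output_tokens` are guesses, not a documented Claude Code schema):

```typescript
// Hypothetical: sum token counts from JSONL log text.
interface UsageRecord {
  message?: { usage?: { input_tokens?: number; output_tokens?: number } };
}

function totalTokens(jsonl: string): { input: number; output: number } {
  let input = 0;
  let output = 0;
  for (const line of jsonl.split("\n")) {
    if (!line.trim()) continue; // skip blank lines
    let rec: UsageRecord;
    try {
      rec = JSON.parse(line);
    } catch {
      continue; // tolerate malformed lines
    }
    input += rec.message?.usage?.input_tokens ?? 0;
    output += rec.message?.usage?.output_tokens ?? 0;
  }
  return { input, output };
}

const sample =
  '{"message":{"usage":{"input_tokens":120,"output_tokens":45}}}\n' +
  '{"message":{"usage":{"input_tokens":80,"output_tokens":30}}}';
console.log(totalTokens(sample)); // { input: 200, output: 75 }
```

In practice you'd feed it the contents of the local log files instead of the inline sample.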
Yeah, and that kind of waste + lack of transparency leans more toward systemic issues than just a one-off mistake.
The three-layer memory architecture breakdown is the most valuable part of this for me. I've been building a similar tiered system for a large static site project: a lightweight index that loads every session, topic-specific files pulled on demand, and raw logs that only get searched when you need something specific. Seeing that Anthropic arrived at essentially the same pattern independently is validating.
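The tiering described above can be sketched in a few lines. All names here are illustrative, not Anthropic's actual implementation:

```typescript
// Three-tier memory sketch:
//   tier 1: small index, loaded every session
//   tier 2: topic files, fetched on demand
//   tier 3: raw logs, scanned only when a search is requested
class TieredMemory {
  constructor(
    private index: string[],             // always in context
    private topics: Map<string, string>, // pulled on demand
    private logs: string[],              // searched lazily
  ) {}

  sessionContext(): string[] {
    return this.index; // tier 1: cheap, always present
  }

  loadTopic(name: string): string | undefined {
    return this.topics.get(name); // tier 2: explicit fetch
  }

  searchLogs(term: string): string[] {
    // tier 3: full scan only when asked for something specific
    return this.logs.filter((entry) => entry.includes(term));
  }
}

const mem = new TieredMemory(
  ["project: static site", "topics: build, deploy"],
  new Map([["deploy", "deploy via rsync to staging"]]),
  ["2026-03-31 build failed: missing ignore entry"],
);
console.log(mem.searchLogs("build failed")); // one matching log line
```

The point of the split is cost: the index stays tiny enough to afford every session, while the expensive tiers are paid for only when actually used.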
The .npmignore lesson is the real takeaway for every team though. We treat deployment configs as plumbing, but they're actually security boundaries. I've audited my own npm publish workflows twice since reading about this, and found a test fixture directory that would have shipped if I hadn't checked.
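One low-effort version of that audit: `npm pack --dry-run` lists exactly what a publish would include, without uploading anything. A sketch, run from the package root:

```shell
# List the files a publish would ship, without publishing anything.
npm pack --dry-run

# Or build the tarball locally and inspect its contents
# (the .tgz filename depends on your package name and version).
npm pack
tar -tzf ./*.tgz
```

Wiring the dry run into CI and diffing its output against an expected file list turns "did I check the config?" into a failing build.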
Great breakdown of the timeline and the alternative theory section. Whether intentional or not, the engineering quality speaks for itself.
This is a wild story and a great reminder of how fragile our deployment pipelines can be. It's fascinating how a single missing line in .npmignore can expose so much. I've been working on a system of .mdc rules for Cursor specifically to prevent these kinds of 'human' errors that AI tends to replicate or overlook. By codifying these constraints (like mandatory ignore patterns or security checks) directly into the model's context, we can catch these slips before they even reach the terminal. Prevention is definitely better than detection when it comes to source code leaks!
Righttt!!!
The .npmignore lesson here is actually a great reminder for anyone shipping AI tools. One config file you forgot to check becomes your entire codebase on the internet. I have been building AI-powered digital products and the same principle applies: always audit what you are exposing before you publish. Great writeup.