What happened — the facts in three minutes
On March 31, 2026, researcher Chaofan Shou discovered that version 2.1.88 of the @anthropic-ai/claude-code npm package included a 59.8 MB source map file that should never have been there.
Source maps are debug files that link compiled code back to the original source — extremely useful internally, but not intended for public distribution. In this case, the file pointed directly to a ZIP archive hosted on Anthropic's Cloudflare R2 storage, downloadable by anyone.
The error was technical and mundane: with the Bun bundler (which Claude Code uses), source maps are generated automatically unless explicitly disabled. Someone had forgotten to add *.map to the .npmignore file.
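The mistake described above is easy to guard against mechanically. As a minimal sketch (not Anthropic's actual tooling, and with invented file names), a pre-publish check can scan the files that would ship in the package for stray source-map artifacts:

```python
# Hypothetical pre-publish guard against the exact mistake described
# above: look for *.map files in the directory that will be packed.
# The directory layout and file names are illustrative only.
import tempfile
from pathlib import Path

def find_stray_sourcemaps(dist_dir: str) -> list[str]:
    """Return any *.map files that would be included in the tarball."""
    return sorted(str(p) for p in Path(dist_dir).rglob("*.map"))

# Simulate a dist/ directory where a bundler emitted a forgotten map file.
with tempfile.TemporaryDirectory() as d:
    Path(d, "cli.js").write_text("#!/usr/bin/env bun\n")
    Path(d, "cli.js.map").write_text('{"version": 3}')
    strays = find_stray_sourcemaps(d)
    if strays:
        print("refusing to publish, found source maps:", strays)
```

Running such a check in CI before `npm publish` (or using an explicit allow-list such as the `files` field in package.json, so that only named files ship) would have caught the 59.8 MB file before it left the building.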
Within hours, the code had been archived on GitHub, forked more than 41,500 times, and analysed by thousands of developers worldwide.
What did NOT happen — and why that matters
Before diving into the technical revelations, it is important to be clear about what this incident was not.
No customer data was exposed. Claude Code is a CLI tool for developers — it does not handle end-user data from companies using Claude via API or Enterprise.
No credentials or API keys were compromised. The exposed source code concerns the application logic of the tool, not access secrets or production configurations.
This was not an intentional breach. Anthropic confirmed immediately: "This was a release packaging issue caused by human error, not a security breach." No external actor compromised any system — it was an internal error in the release process.
For companies using Claude via API, Claude.ai or Claude Enterprise: nothing changes from a data security perspective.
The architecture the community found
When thousands of developers analyse 512,000 lines of code written by one of the most serious AI labs in the world, little escapes notice.
Claude Code runs on Bun, not Node.js — a modern technical choice that prioritises performance and execution speed. The terminal interface is built with React and Ink, an elegant approach for complex CLI applications.
The architecture is modular and well-structured: around 40 built-in tools, each permission-gated, and a 46,000-line query engine handling all model API calls, streaming, caching and response orchestration.
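What "around 40 built-in tools, each permission-gated" could look like in miniature is sketched below. All names and the dispatch logic are invented for illustration; this is not Anthropic's actual implementation:

```python
# Illustrative sketch of a permission-gated tool registry. Tool names,
# the approval flag, and the dispatch logic are all hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    run: Callable[[str], str]
    requires_approval: bool  # e.g. shell execution or file writes

REGISTRY: dict[str, Tool] = {}

def register(tool: Tool) -> None:
    REGISTRY[tool.name] = tool

def invoke(name: str, arg: str, user_approved: bool = False) -> str:
    """Run a tool, refusing gated tools without explicit user approval."""
    tool = REGISTRY[name]
    if tool.requires_approval and not user_approved:
        raise PermissionError(f"tool {name!r} requires user approval")
    return tool.run(arg)

register(Tool("read_file", lambda p: f"<contents of {p}>", False))
register(Tool("run_shell", lambda c: f"<output of {c}>", True))
```

The design choice the community praised is visible even at this scale: the permission check lives in one dispatcher, not scattered across forty tool implementations.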
Anyone expecting improvised or chaotic code was in for a pleasant surprise. Community analysis confirmed a professional codebase with clear separation of responsibilities and careful permission management.
KAIROS and the unreleased features: the future of Claude Code
The most discussed discovery is KAIROS — from the Ancient Greek for "the right time" — a feature mentioned more than 150 times in the code that has not yet been publicly released.
KAIROS represents a paradigm shift: Claude Code as an always-on daemon that monitors the development context in the background and intervenes autonomously at the right moment — not only when the user explicitly invokes it.
Other features discovered but not yet public:
ULTRAPLAN — extended remote planning mode with up to 30 minutes of autonomous reasoning.
Coordinator mode — orchestration of multiple agents in parallel on complex tasks.
Buddy — a persistent AI companion integrated into the development workflow.
The codebase contains around 44 feature flags — fully built features that are disabled in the public release. Anthropic's product roadmap is significantly more advanced than what is visible externally.
The irony: Undercover Mode exposed
Perhaps the most ironic detail of the whole affair: the code revealed a feature called Undercover Mode.
This is a system designed to prevent Claude Code from accidentally revealing internal model names, project codenames and references to internal Slack channels in public commits. The system prompt injected during Undercover Mode explicitly instructs the model not to mention names like "Capybara" (the internal codename for a Claude variant) or "Fennec" (Opus 4.6).
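According to the leak, the safeguard is instruction-based (injected into the system prompt). A defence-in-depth complement might also scan output before it reaches a public commit, roughly like this sketch; the codenames come from the article, but the scanning function is invented for illustration:

```python
# Hypothetical complement to the prompt-based safeguard described
# above: scan generated text for internal codenames before it can
# land in a public commit. Only the codename strings are from the
# leaked code; the function itself is illustrative.
INTERNAL_CODENAMES = {"Capybara", "Fennec"}

def leaked_codenames(text: str) -> set[str]:
    """Return any internal codenames appearing in the text."""
    return {name for name in INTERNAL_CODENAMES if name in text}
```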
A system built to maintain secrecy — exposed by a .map file forgotten in an npm package. The community did not miss the irony.
What this means for companies evaluating Claude
For organisations considering Claude, this episode offers three concrete takeaways.
First: Anthropic builds serious software. The code analysed by the community revealed no engineering shortcuts or improvisation; it confirmed the technical quality of a company that invests heavily in product robustness.
Second: even the best companies make operational mistakes. What matters is the response — Anthropic communicated transparently, corrected quickly, and announced preventive measures. That is exactly the behaviour expected from a reliable enterprise vendor.
Third: the Claude Code roadmap is ambitious. The features discovered — KAIROS above all — suggest that AI developer tools will make a significant leap towards autonomy in the coming months. Organisations planning Claude adoption would do well to think with a medium-term perspective.
If you want to understand how to integrate Claude into your organisation in light of these developments, the Maverick AI team can guide you with an independent assessment.