A 59.8 MB source map file for Anthropic's AI product Claude Code was accidentally published to the public npm registry in version 2.1.88 of the @anthropic-ai/claude-code package. The file, meant for internal debugging, allowed developers to reconstruct the full 512,000-line TypeScript codebase. By 4:23 am ET, Chaofan Shou, an intern at Solayer Labs, shared the discovery on X, including a link to a hosted archive. The code quickly spread across GitHub, drawing analysis from thousands of developers worldwide.

Anthropic confirmed the leak, acknowledging that the exposure could compromise proprietary methods behind one of its most valuable products. Claude Code is a core driver of the company's growth, with annualized recurring revenue of $2.5 billion, more than double its figure at the start of 2026. Eighty percent of that revenue comes from enterprise clients, making the leak a significant setback.

The incident occurred amid rising competition in the AI agent space, where rivals including Cursor could now study the architecture of a market-leading tool. While source maps are typically excluded from public releases, their inclusion here gave outsiders an unusually clear view of how Claude Code operates under the hood.
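To see why a published source map is so revealing: a source map is plain JSON, and debug-oriented builds often embed the original files verbatim in a `sourcesContent` array. Recovering them is trivial. The sketch below is a hypothetical, minimal illustration of that general mechanism; the file names and contents are invented, not taken from the leaked package.

```python
import json

# Hypothetical minimal source map -- a real one from a bundler has the
# same top-level shape, just vastly larger.
source_map = json.dumps({
    "version": 3,
    "file": "cli.js",
    "sources": ["src/agent.ts"],
    "sourcesContent": ["export const run = () => console.log('hi');\n"],
    "mappings": "AAAA",
})

def extract_sources(map_text: str) -> dict:
    """Return {original path: original source} for every file the map embeds."""
    m = json.loads(map_text)
    return dict(zip(m.get("sources", []), m.get("sourcesContent") or []))

recovered = extract_sources(source_map)
for path, src in recovered.items():
    print(path, "->", len(src), "bytes recovered")
```

When `sourcesContent` is present, no reverse engineering is needed at all; the original TypeScript comes out byte-for-byte, which is why production npm publishes normally strip `.map` files (for example via the `files` allowlist in package.json).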

💡 NaijaBuzz Take

When Anthropic says the file was meant for internal use only, that means its most valuable AI product just had its DNA exposed. For a company pulling in $2.5 billion a year from enterprise clients who trust its tech as proprietary, this leak undermines confidence in its competitive edge. That a single npm package update could release such critical IP suggests a breakdown not just in process, but in how tightly guarded these systems really are. In an era where AI differentiation hinges on secrecy and speed, this error hands rivals a rare gift: a working blueprint from the front lines of the agent revolution.