Anthropic accidentally exposed the source code for its coding tool Claude Code after publishing version 2.1.88 to the public npm registry with a source map file included. The leak, which occurred on Tuesday morning, revealed more than 500,000 lines of code across nearly 2,000 internal files. Security researcher Chaofan Shou shared a link to an archive of the files on X, where it quickly drew over 26 million views.

An Anthropic spokesperson confirmed the incident was the result of human error and emphasized that no customer data or credentials were compromised. "Earlier today, a Claude Code release included some internal source code," the spokesperson said. The company has since removed the file and is putting safeguards in place to prevent future leaks.

While the exposure does not affect user security, it gives developers and competitors unprecedented access to the inner workings of one of Anthropic's most widely used tools. Claude Code has gained significant traction recently, particularly for "vibe coding," in which users describe programming tasks in casual language. The tool supports a range of functions, including code generation, text summarization, language translation, and image analysis. Anthropic has been positioning Claude as a premium alternative to OpenAI's offerings, even launching a Super Bowl ad campaign criticizing ChatGPT's ad-supported model.
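For context on the mechanism: source maps (`.map` files) link minified or bundled JavaScript back to its original source, and they often embed that source verbatim, which is why shipping one to npm can leak an entire codebase. One common safeguard is npm's `files` allowlist in `package.json`, which restricts what gets published. A minimal sketch, with a hypothetical package name and paths:

```json
{
  "name": "example-cli",
  "version": "1.0.0",
  "bin": { "example-cli": "dist/cli.js" },
  "files": [
    "dist/**/*.js",
    "!dist/**/*.js.map"
  ]
}
```

Running `npm pack --dry-run` before publishing prints exactly which files would be included in the tarball, which is an easy way to catch a stray `.map` file before it reaches the registry.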

💡 NaijaBuzz Take

When Anthropic says the leak was just "internal source code" with no customer data exposed, that downplays what actually happened: competitors can now dissect how Claude Code structures its logic and optimizes performance. That kind of insight could accelerate rival AI development, especially among well-resourced teams, including those in Nigeria's growing AI scene at Andela or Data Science Nigeria. For local developers building code assistants, the leak offers a rare blueprint of how a top-tier AI tool operates behind the curtain. The real cost isn't a security breach; it's the loss of competitive opacity.