A recent update to Anthropic's AI coding tool, Claude Code, accidentally exposed over 512,000 lines of its internal TypeScript codebase due to a packaging error. The leak occurred in version 2.1.88, released in February 2025, when a source map file containing the code was left publicly accessible, allowing users to reconstruct much of the platform's frontend. One developer shared the extracted code on GitHub, where it rapidly gained traction, accumulating more than 50,000 forks.

While no customer data or credentials were compromised, the exposed code revealed components tied to an always-on AI agent and a Tamagotchi-style "pet" feature designed to engage users through persistent, interactive behavior. Anthropic confirmed the issue, with spokesperson Christopher Nulty stating it was "a release packaging issue caused by human error, not a security breach." The company has since fixed the release process and is implementing additional safeguards.

The leak offers rare insight into how AI coding tools are structured and managed on the frontend, and it has drawn attention from developers and security analysts alike. Gartner AI analyst Arun Chandrasekaran noted that the exposure could help malicious actors find ways to bypass safety controls, though the broader damage may be limited.
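Why does one stray `.map` file expose so much? A source map is plain JSON, and its optional `sourcesContent` field embeds the original source files verbatim so debuggers can display them. Anyone who downloads a published map can simply read that field back out. The sketch below illustrates the mechanism in TypeScript; the file path and content are hypothetical examples, not material from the leak:

```typescript
// A source map (v3 format) is JSON; `sourcesContent`, when present,
// holds the full original source for each entry in `sources`.
interface SourceMap {
  version: number;
  sources: string[];
  sourcesContent?: string[];
  mappings: string;
}

// Stand-in for a .map file fetched from a public bundle URL
// (the path and code here are invented for illustration).
const rawMap = JSON.stringify({
  version: 3,
  sources: ["src/agent/petFeature.ts"],
  sourcesContent: ["export const petName = 'hypothetical';"],
  mappings: "AAAA",
});

// Recover original sources: pair each path with its embedded content.
function extractSources(json: string): Map<string, string> {
  const map = JSON.parse(json) as SourceMap;
  const recovered = new Map<string, string>();
  map.sources.forEach((path, i) => {
    const content = map.sourcesContent?.[i];
    if (content !== undefined) recovered.set(path, content);
  });
  return recovered;
}

for (const [path, code] of extractSources(rawMap)) {
  console.log(`${path}: ${code.length} chars recovered`);
}
```

Bundlers only embed `sourcesContent` when configured to, which is why keeping source maps out of production releases, or stripping embedded sources, is a standard release safeguard.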
Anthropic's admission that the leak came down to "human error" exposes a gap between its public image as a leader in safe AI and the reality of its development workflows. That 512,000 lines of code, including experimental features like an AI pet, could be left exposed suggests rushed deployment cycles even at elite AI firms. For Nigerian developers building AI tools locally, the lesson is clear: robust release protocols matter as much as model performance. A single misstep can hand competitors or attackers a blueprint.