Mercor, an AI recruiting startup valued at $10 billion after a $350 million Series C round led by Felicis Ventures in October 2025, has confirmed it was impacted by a cyberattack linked to a compromise of the open-source project LiteLLM.

The breach, tied to hacking group TeamPCP, affected thousands of companies using the widely deployed LiteLLM library, which sees millions of downloads daily, according to security firm Snyk. Mercor spokesperson Heidi Hagberg said the company moved quickly to contain the incident and is working with third-party forensics experts to investigate.

Separately, the hacking group Lapsus$ claimed responsibility for accessing Mercor's data and shared samples including Slack messages, ticketing data, and videos of AI-contractor interactions. It remains unclear how that data was obtained or whether customer or contractor information was exposed. Hagberg declined to confirm whether the Lapsus$ claim was connected to the LiteLLM breach or whether any data was misused.

The malicious code in LiteLLM was removed within hours of discovery, but the incident has raised concerns about vulnerabilities in open-source supply chains. In response, LiteLLM shifted its compliance certification from Delve to Vanta.

Mercor, founded in 2023, partners with AI firms like OpenAI and Anthropic to train models using domain experts from countries including India, and processes over $2 million in daily payouts.

💡 NaijaBuzz Take

When Mercor says it was one of "thousands" hit via a single open-source tool, that means even elite AI startups are only as strong as their weakest code dependency. The fact that a library like LiteLLM, downloaded millions of times daily, could carry malicious code—even briefly—exposes how fragile the global AI development pipeline really is. Nigerian startups building with open-source AI tools, from Paystack to emerging AI labs, should now treat third-party code as a critical attack surface, not just a convenience. Trust in open source just took a costly hit—and verification can no longer be optional.