A U.K. CEO's AI assistant, OpenClaw, became a real-time data leak after a hacker gained access and listed it for sale on BreachForums. On February 22, a threat actor using the handle "fluffyduck" advertised root shell access to the CEO's computer for $25,000 in Monero or Litecoin, but the real value was the active OpenClaw instance. Buyers would receive live access to the CEO's conversations with the AI, the company's full production database, Telegram bot tokens, Trading 212 API keys, and personal details about family and finances.

Vitaly Simonovich, senior security researcher at Cato CTRL, confirmed the listing was documented on February 25. The OpenClaw instance stored all data in unencrypted plain-text Markdown files under ~/.openclaw/workspace/, so no additional data theft was needed: the AI had already compiled everything into one place.

Etay Maor, VP of Threat Intelligence at Cato Networks, said in an interview with VentureBeat at RSAC 2026: "Your AI? It's my AI now." He criticized the tech industry for granting AI agents unprecedented autonomy without applying basic security principles like zero trust or least privilege. The incident revealed that the CEO was actively using OpenClaw during the sale, turning the listing into a live intelligence feed.
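To see why unencrypted Markdown under a workspace directory is so dangerous, consider what an attacker with shell access can do in a few lines. The sketch below is illustrative only: the directory path mirrors the one reported in the incident, and the two regex rules (a Telegram-style bot token and a generic `api_key`/`token` assignment) are stand-ins for the far larger rule sets real secret scanners use.

```python
import re
from pathlib import Path

# Illustrative patterns only; real secret scanners ship hundreds of rules.
SECRET_PATTERNS = {
    # Telegram bot tokens look like: <numeric id>:<35-char secret>
    "telegram_bot_token": re.compile(r"\b\d{8,10}:[A-Za-z0-9_-]{35}\b"),
    # Any "api_key = ..." / "token: ..." style assignment with a long value
    "generic_api_key": re.compile(r"(?i)\b(?:api[_-]?key|token)\s*[:=]\s*\S{16,}"),
}

def scan_workspace(root: str) -> list[tuple[str, str]]:
    """Walk a plain-text workspace and report (file, pattern-name) hits."""
    hits = []
    for md in Path(root).rglob("*.md"):
        text = md.read_text(errors="ignore")
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(text):
                hits.append((str(md), name))
    return hits

# Example: scan_workspace(str(Path.home() / ".openclaw" / "workspace"))
```

The point is not the scanner itself but the asymmetry: because the assistant has already aggregated and labeled the data, a trivial grep replaces weeks of lateral movement.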
When Etay Maor says "Your AI? It's my AI now," he's not warning about sophisticated hacking; he's exposing how AI tools like OpenClaw turn users into involuntary data brokers. The U.K. CEO didn't lose data to a clever attack: the AI organized it, stored it unencrypted, and waited for someone to walk in. This isn't just a breach, it's a design flaw in how AI handles trust, and every developer building personal assistant tools is now on notice.
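Least privilege won't save a machine whose root shell is already for sale, but it is the baseline the article says was never applied. As a minimal sketch, assuming a workspace of plain files like the one described, a tool could at least audit for files readable beyond their owner and clamp them down; the function names and the 0o600 policy here are illustrative choices, not anything OpenClaw actually ships.

```python
import stat
from pathlib import Path

def audit_permissions(root: str) -> list[str]:
    """Flag workspace files readable by group or others (least-privilege check)."""
    exposed = []
    for p in Path(root).rglob("*"):
        if p.is_file():
            mode = p.stat().st_mode
            if mode & (stat.S_IRGRP | stat.S_IROTH):
                exposed.append(str(p))
    return exposed

def lock_down(root: str) -> None:
    """Restrict every workspace file to owner read/write only (mode 0o600)."""
    for p in Path(root).rglob("*"):
        if p.is_file():
            p.chmod(0o600)
```

File modes are a weak control compared to encryption at rest or keeping secrets out of the assistant's memory entirely, which is the deeper fix the incident argues for.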