Meta's smart glasses are under scrutiny in Kenya after regulators launched an investigation into whether the devices improperly collect and expose sensitive user data. Kenya's Office of the Data Protection Commissioner is probing how the Ray-Ban Meta glasses gather, process, and share personal information, including footage captured in private moments. The glasses, which can record video, take photos, and respond to voice commands, were marketed as privacy-focused, but recent findings suggest they may send captured content to human reviewers, some based in Kenya, to train AI systems. This has raised concerns about consent and surveillance, especially since sensitive details such as financial information and intimate moments could end up in review pipelines.
Regulators in the US and UK are also examining similar issues, signaling a broader global debate over AI-powered wearables and data protection. For Kenya, the probe highlights the challenge of balancing innovation with safeguarding citizens' privacy in an AI-driven world. The controversy comes as the country positions itself at the center of discussions about tech governance in emerging markets.
When Meta says its Ray-Ban glasses are privacy-conscious, that claim collapses under scrutiny. The fact that human reviewers in Kenya are handling private footage, including financial and intimate details, means the company's "privacy-first" marketing is either dishonest or dangerously naive. For Nigerian developers building AI tools, this should serve as a warning: if Meta can't get wearables right, what chance do smaller players have when regulators come knocking? The lesson isn't about technology but trust: once broken, it's nearly impossible to rebuild.