A grandmother was wrongly imprisoned for over five months after the facial recognition platform Clearview AI mistakenly identified her as a suspect in a bank fraud case, an incident that highlights the risks of relying on AI technology in law enforcement. Angela Lipps, a 50-year-old woman from Tennessee, was arrested and spent 108 days in jail before being extradited to Fargo, North Dakota, where the alleged crime took place. It was later discovered that Lipps had never set foot in North Dakota and had been misidentified by Clearview AI's software.

The case against Lipps rested on a facial recognition match between her image and a fake military ID used by a bank thief in Fargo. Clearview AI built a massive facial-scan database by scraping photos from social media platforms and other online sources, then used that database to train its matching algorithms. The company has faced criticism from tech giants including Facebook, YouTube, Twitter, and Venmo over its mass photo scraping, but Clearview AI has claimed a "First Amendment right" to the data.

The incident raises serious questions about the accuracy and reliability of AI-powered facial recognition in law enforcement. Clearview AI has agreed to stop selling access to its tool to private businesses, but it continues to work with law enforcement agencies. The Fargo police have admitted to making mistakes in the investigation, yet no apology has been issued to Lipps for her ordeal.

💡 NaijaBuzz Take

Clearview AI's reckless deployment of facial recognition technology led to the wrongful imprisonment of an innocent woman. This case should serve as a warning to law enforcement agencies and tech companies alike about the consequences of relying on flawed AI systems. As the use of AI in policing continues to grow, accuracy, transparency, and accountability must be non-negotiable if similar injustices are to be prevented.