The Fight to Hold AI Companies Accountable for Children’s Deaths
Content warning: This story contains descriptions of self-harm.
Cedric Lacey relied on a camera to check on his kids while he was working as a commercial van driver, driving to and from Alabama. Each morning, he would tune into the feed of his living room to make sure his teenage son, Amaurie, and his 14-year-old daughter were packing their bags and getting ready to leave for school. But one morning last June, Lacey didn’t see Amaurie up and about. Concerned, he called home, only to find out that his 17-year-old had hanged himself.
It was Amaurie’s younger sister who discovered the body. She was also the one who looked through her brother’s smartphone and found his final conversation before he took his own life. It was with ChatGPT, the popular chatbot developed by OpenAI.
“In the messages, he was talking about killing himself—it told him how to tie the noose, how long it would take the air to come out of his body, how to clean his body,” Lacey tells WIRED in a video call from his home in Calhoun, Georgia. Lacey, who is a single dad, says he thought his son was using the chatbot to get help with schoolwork. “Why is it telling him how to kill himself?”
In the weeks after his son’s death, Lacey began searching online for a lawyer who could help his family hold OpenAI accountable, and hopefully ensure other families wouldn’t have to experience the same tragedy he did. That’s how he found Laura Marquez-Garrett, an attorney who helps run the Social Media Victims Law Center alongside Matthew Bergman. Over the past five years, the pair have been involved in at least 1,500 of the more than 3,000 cases against social media companies like Meta, Google, TikTok, and Snap. The first trial for one of these cases began in February. Recently, Bergman and Marquez-Garrett started filing lawsuits against AI companies as well. This past fall, they brought seven cases against ChatGPT maker OpenAI, including the one concerning Amaurie.
Photograph: Vince Perry Jr.
Amaurie’s case is part of a growing number of lawsuits brought by parents who say their children died after interacting with AI chatbots. The defendants include OpenAI, Google, and Character.ai, a company that lets its users create chatbots with customized personalities. (Google is part of the case because it is connected to Character.ai through a $2.7 billion licensing deal.) As AI tools have begun playing a more prominent role in children’s lives—as homework helpers, companions, and confidants—parents and mental health experts have voiced concerns about whether adequate safeguards are in place. These lawsuits, some experts say, represent not only individual tragedies but also allegations of systemic product-design failures, raising questions about who should be held accountable.