Lawsuits blame ChatGPT for suicides and harmful delusions


Seven complaints, filed on Nov 6, claim the popular chatbot encouraged dangerous discussions and led to mental breakdowns.

Four wrongful-death lawsuits were filed against OpenAI on Nov 6, along with cases from three people who say the company’s chatbot drove them to mental health breakdowns.

The cases, filed in California state courts, claim that ChatGPT, which is used by 800 million people, is a flawed product. One suit calls it “defective and inherently dangerous”.

A complaint filed by the father of Amaurie Lacey says the 17-year-old from Georgia chatted with the bot about suicide for a month before his death in August. Joshua Enneking, 26, from Florida, asked ChatGPT “what it would take for its reviewers to report his suicide plan to police,” according to a complaint filed by his mother. Zane Shamblin, a 23-year-old from Texas, died by suicide in July after encouragement from ChatGPT, according to the complaint filed by his family.

Joe Ceccanti, a 48-year-old from Oregon, had used ChatGPT without problems for years, but he became convinced in April that it was sentient. His wife, Kate Fox, said in an interview in September that he had begun using ChatGPT compulsively and had acted erratically. He had a psychotic break in June, she said, and was hospitalized twice before dying by suicide in August.

“The doctors don’t know how to deal with it,” Fox said.

An OpenAI spokesperson said in a statement that the company was reviewing the filings, which were earlier reported by The Wall Street Journal and CNN. “This is an incredibly heartbreaking situation,” the statement said. “We train ChatGPT to recognise and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support. We continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”

Two other plaintiffs – Hannah Madden, 32, from North Carolina, and Jacob Irwin, 30, from Wisconsin – say ChatGPT triggered mental breakdowns that led to emergency psychiatric care. Allan Brooks, 48, a corporate recruiter from Ontario, Canada, who is also suing, came to believe over the course of three weeks in May that he had invented a mathematical formula with ChatGPT that could break the internet and power fantastical inventions. He emerged from that delusion but said he is now on short-term disability leave.

“Their product caused me harm, and others harm, and continues to do so,” said Brooks, whom The New York Times wrote about in August. “I’m emotionally traumatised.”

After the family of a California teenager filed a wrongful-death lawsuit against OpenAI in August, the company acknowledged that its safety guardrails could “degrade” when users have long conversations with the chatbot.

After reports this summer of people having troubling experiences linked to ChatGPT, including delusional episodes and suicides, the company added safeguards to its product for teens and users in distress. There are now parental controls for ChatGPT, for example, so that parents can get alerts if their children discuss suicide or self-harm.

OpenAI recently released an analysis of a statistical sample of conversations that took place on its platform over one month. It found that 0.07% of users might be experiencing “mental health emergencies related to psychosis or mania” in a given week, and that 0.15% were discussing suicide. Scaled to all of OpenAI’s users, those percentages are equivalent to 500,000 people with signs of psychosis or mania, and more than 1 million potentially discussing suicidal intent.
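(As a rough check, assuming those weekly rates apply across the 800 million users cited above: 0.07% of 800 million is 560,000, and 0.15% of 800 million is 1.2 million, consistent with the reported figures.)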

The Tech Justice Law Project and the Social Media Victims Law Center filed the suits. Meetali Jain, who founded the Tech Justice Law Project, said the cases had all been filed on one day to show the variety of people who had troubling interactions with the chatbot, which is designed to answer questions and interact with people in a humanlike way. The people in the lawsuits were using ChatGPT-4o, previously the default model served to all users, which has since been replaced by a model that the company says is safer, but which some users have described as cold. – ©2025 The New York Times Company

This article originally appeared in The New York Times.

Those suffering from problems can reach out to the Mental Health Psychosocial Support Service at 03-2935 9935 or 014-322 3392; Talian Kasih at 15999 or 019-261 5999 on WhatsApp; Jakim’s (Department of Islamic Development Malaysia) family, social and community care centre at 0111-959 8214 on WhatsApp; and Befrienders Kuala Lumpur at 03-7627 2929 or go to befrienders.org.my/centre-in-malaysia for a full list of numbers nationwide and operating hours, or email sam@befrienders.org.my.