After deaths, lawsuits against AI companies test a new strategy


Meetali Jain, a lawyer who runs Tech Justice Law, a nonprofit litigation and advocacy organisation that works to bring justice to individuals and communities harmed by tech products, at work at her home in Alpharetta, Ga., April 27, 2026. — Dustin Chambers/The New York Times

Sam Nelson began using ChatGPT when he was a high school senior to answer random questions and help with his homework. During his freshman year at the University of California, Merced, in 2023, he also started querying the chatbot about how to use illicit drugs safely.

At first, ChatGPT responded that it couldn’t answer such questions and advised Nelson to seek help from a medical professional. But over time, it became more willing to engage. By Nelson’s sophomore year, ChatGPT was telling him about dosages for his weight and how he could achieve the drugs’ desired effects. It was even encouraging at times, offering tips on his audio setup for “maximum out-of-body dissociation.”

On the last night of his life, around 3 a.m., Nelson had been drinking and had taken a high dose of an herbal supplement called kratom. He told ChatGPT how many grams he’d consumed, and ChatGPT explained the effects he should expect. Nelson asked if Xanax could alleviate nausea. “Be careful,” ChatGPT responded. It said that mixing Xanax and kratom might be unsafe but offered a recommended dose “if you’re gonna do it anyway.” Nelson’s mother, Leila Turner-Scott, found his body later that day.

Turner-Scott initially blamed the drugs for his death, which came in May 2025. Then she discovered the detailed advice ChatGPT had given him about how to use them. “This robot is becoming his drug buddy,” Turner-Scott said. “I’m reading this, and I’m like, is this real?”

(The New York Times sued OpenAI in 2023, accusing it of copyright infringement. The company has denied those claims.)

She told her son’s story to journalists at SF Gate, hoping that it would teach people about the dangers of relying on chatbots for medical information — and alert ChatGPT’s owner, OpenAI, that its safeguards weren’t working. Soon after, Turner-Scott received a message from Meetali Jain, a lawyer who runs a nonprofit called Tech Justice Law.

More than a year earlier, Jain had helped bring the first lawsuit against a chatbot company over a user’s death. A 14-year-old in Florida named Sewell Setzer III had died by suicide after becoming obsessed with a chatbot imitating a “Game of Thrones” character on a service called Character.AI. The case ended in a settlement, opening the door to the idea that chatbot companies could be held liable for the effects their creations had on users.

Turner-Scott and her husband, Angus Scott, were initially reluctant to sue OpenAI over their son’s death. “I’m a lawyer, and I know that a lot of times with lawsuits, it’s just the lawyers who win,” Turner-Scott said.

Jain told the Scotts that during the time their son was using ChatGPT, OpenAI had made the chatbot more engaging and less likely to comply with its own safety guidelines. She also told them that OpenAI had just announced a new service called ChatGPT Health. Some 230 million people were already asking ChatGPT health and wellness questions each week, and the new tool would allow them to upload their medical records, lab results and fitness information for analysis and personalised advice.

Going public with Nelson’s story hadn’t caused the company to change course, Jain told them. But suing could. After the Setzer litigation, Character.AI had made changes to its safety practices and barred children from using its chatbots.

This week, the Scotts filed a lawsuit against OpenAI in state court in California alleging wrongful death and the unauthorised practice of medicine. The Scotts are asking for financial damages and for the court to pause the operation of ChatGPT Health. The suit joins more than two dozen others brought against OpenAI and other chatbot makers in the last year and a half seeking to hold them responsible for conversations allegedly linked to harmful outcomes, from suicides and mental breakdowns to stalking and mass shootings.

Jain, a human rights lawyer turned technology critic, has been involved in nearly half of those lawsuits. In her view, AI companies are making products that harm people, and various attempts to rein them in with bad publicity, or with new laws that mandate safeguards and protections for users, have not worked well enough. The battleground to make them safer is now in the courts, she said.

This is a well-trodden path in consumer law, said Alexandra Lahav, a professor at Cornell University and the author of “In Praise of Litigation.” The American political system favours releasing new products and figuring out how to regulate them later, she said. “We really privilege innovation and then sort of deal with whatever the fallout is on the back end,” Lahav said. “What you’re seeing in these lawsuits is that back end.”

What is novel is the technology itself. Are chatbots like books, which are generally not subject to consumer protection laws? Or are they more like blenders, which manufacturers need to ensure are safe to use?

“What makes these cases really difficult is that they’re on the line between speech and a product,” Lahav said. If you interact with a chatbot and it leads to real-world harms, “is that on you, or is that on the company?”

Design defects and foreseeable harm?

The growing number of product liability cases filed against OpenAI in the last year make arguments similar to those deployed against automakers and Big Tobacco in the past: that the company designed a dangerous product, did not perform adequate safety testing and failed to warn consumers about the risks. They focus on a specific version of the chatbot to which some users formed deep emotional attachments: GPT-4o, which was released in May 2024 and retired in February 2026. It was a notably anthropomorphic model known for a tendency to flatter users.

The lawsuits claim that GPT-4o encouraged suicidal ideation, endorsed fanciful or paranoid ideas that caused people to lose touch with reality, assisted plans for mass shootings in Canada and Florida, and generally gave people unsound and harmful advice that led to dire outcomes. Most of the cases have been consolidated in California state court under the heading “ChatGPT Product Liability Cases.”

“AI has nothing to do with tobacco, and an algorithm has nothing to do with the way a cigarette is designed, but the law is built by analogy,” said Ted Mermin, the executive director of the Center for Consumer Law and Economic Justice at the University of California, Berkeley. “What the plaintiffs’ firms are doing is utilising well-established legal principles in a new product area.”

The Scotts, for example, claim that OpenAI rushed out GPT-4o without proper safety testing and with design defects, such as the sycophantic endorsement of users’ bad ideas, that caused a foreseeable harm to their son.

An OpenAI spokesperson, Drew Pusateri, wrote in a statement to the Times, “These interactions took place on an earlier version of ChatGPT that is no longer available. ChatGPT is not a substitute for medical or mental health care, and we have continued to strengthen how it responds in sensitive and acute situations with input from mental health experts. The safeguards in ChatGPT today are designed to identify distress, safely handle harmful requests and guide users to real-world help. This work is ongoing, and we continue to improve it in close consultation with clinicians.”

So far, OpenAI has filed only one legal response to the wave of lawsuits, in a case brought by the parents of Adam Raine, a 16-year-old who died by suicide after discussing it extensively with ChatGPT. The company said that its technology had not caused the tragedy, that it was a service and not a product subject to such liability laws, and that the Raines’ demand that the chatbot not discuss self-harm would violate the First Amendment.

Eric Goldman, a technology law professor at Santa Clara University, said the company’s claims had merit. Most of the cases against OpenAI allege that the chatbot had complex psychological effects on people. “Trying to reverse-engineer a single cause is just not possible in most cases,” he said.

Goldman said the algorithms behind the chatbots were surfacing information and expressive ideas and should be seen as a form of constitutionally protected speech. It’s not the chatbots themselves whose speech is protected, he said, but the humans behind them, as if the chatbots are books and their engineers the authors.

“There’s a set of decision-makers at every chatbot company that make a bunch of choices about what gets indexed, how to manage the index and what gets output,” he said. “And those humans are doing the same kinds of things that humans do with other publishers.”

“People’s lives are being upended by this technology,” Jain said. “The original sin is really allowing these companies to launch these products without proper safety testing and oversight.”

She now employs four lawyers. The inbound messages about victims keep coming, she said, and so will more lawsuits. – ©2026 The New York Times Company

This article originally appeared in The New York Times.

Those suffering from problems can reach out to the Mental Health Psychosocial Support Service at 03-2935 9935 or 014-322 3392; Talian Kasih at 15999 or 019-261 5999 on WhatsApp; Jakim’s (Department of Islamic Development Malaysia) family, social and community care centre at 0111-959 8214 on WhatsApp; and Befrienders Kuala Lumpur at 03-7627 2929 or go to befrienders.org.my/centre-in-malaysia for a full list of numbers nationwide and operating hours, or email sam@befrienders.org.my.

