Ash, an AI chatbot, is seen on a phone. Ash is part of an increasingly contentious effort to provide automated alternatives to traditional therapy. — Kendrick Brinson/The New York Times
After having suicidal thoughts this year, Brittany Bucicchia checked herself into a mental health facility near her home in rural Georgia.
When she left several days later, her doctors recommended that she continue treatment with a psychotherapist. But she was wary of traditional therapy after frustrating experiences in the past, so her husband suggested an alternative he had found online – a therapy chatbot, Ash, powered by artificial intelligence.
Bucicchia said it had taken a few days to get used to talking and texting with Ash, which responded to her questions and complaints, provided summaries of their conversations and suggested topics she could think about. But soon, she started leaning on it for emotional support, sharing the details of her daily life as well as her hopes and fears.
At one point, she said, she recounted a difficult memory about her time in a mental health facility. The chatbot replied that if she was having suicidal thoughts, she should contact a professional and gave her a toll-free number to call.
“There was a learning curve,” Bucicchia, 37, said. “But it ended up being what I needed. It challenged me, asked a lot of questions, remembered what I had said in the past and went back to those moments when it needed to.”
Bucicchia’s experience is part of an experimental and growing effort to use chatbots as automated alternatives to traditional therapy. That has led to questions about whether these chatbots, built by tech startups and academics, should be regulated as medical devices. On Thursday, the Food and Drug Administration held its first public hearing to explore that issue.
For decades, academics and entrepreneurs saw promise in artificial intelligence as a tool for personal therapy. But concerns over therapy chatbots and whether they can adequately handle delicate personal issues have mounted and become increasingly contentious. This summer, Illinois and Nevada banned the use of therapy chatbots because the technologies were not licensed like human therapists. Other states are exploring similar bans.
The unease has sharpened as people have formed emotional connections with general-purpose chatbots such as OpenAI’s ChatGPT, sometimes with dangerous consequences. In August, OpenAI was sued over the death of Adam Raine, a 16-year-old in Southern California who had spent many hours talking with ChatGPT about suicide. His family accused the company of wrongful death.
(The New York Times has sued OpenAI and its partner, Microsoft, claiming copyright infringement of news content related to AI systems. The companies have denied the claims.)
Unlike ChatGPT, Ash, the chatbot that Bucicchia used, was designed specifically for therapy by Slingshot AI, a New York startup that employs clinical psychologists and others with experience in the development of AI therapy. Slingshot has not tested Ash, which is free to use, as part of a clinical trial.
How well therapy chatbots work is unclear. In September, as scrutiny of both general-purpose chatbots and chatbots designed for therapy rose, Slingshot stopped marketing Ash as a therapy chatbot. The startup now promotes the app for “mental health and emotional well-being.”
At Thursday’s FDA hearing, Derrick Hall, a clinical psychologist with Slingshot, cited a study conducted by the company with academic researchers that found more than 70% of people who used Ash reported feeling less lonely and more socially connected.
“AI designed for well-being can provide enormous benefit at low risk,” he said.
Michelle E. Tarver, an FDA official, said at the hearing that generative AI technologies “hold significant promise” for addressing mental health crises, especially for people in rural areas and other locations where care was not easily available. But the agency was still working to better understand the technology’s benefits and risks, she said.
Automated alternatives to therapy date to the mid-1960s, when Joseph Weizenbaum, a researcher at the Massachusetts Institute of Technology, built an automated psychotherapist called Eliza. When users typed their thoughts onto a computer screen, Eliza asked them to elaborate or repeated their words back in the form of a question. Weizenbaum wrote that he was surprised that people treated Eliza as if it were human, sharing their problems and taking comfort in its responses.
Eliza kicked off a decades-long effort to build chatbots for psychotherapy. In 2017, a startup created by Stanford University psychologists and AI researchers offered Woebot, an app that allowed people to discuss their problems and track their moods. Woebot’s responses were scripted, so it adhered to the established techniques of what is called cognitive behavioral therapy, a common form of conversational therapy.
More than 1.5 million people eventually used Woebot. It was discontinued this year, in part because of regulatory struggles.
By contrast, Ash is based on what is called a large language model, which learns by analyzing large amounts of digital text culled from the internet. By pinpointing patterns in hundreds of Wikipedia articles, news stories and chat logs, for instance, it can generate humanlike text on its own.
Slingshot then honed the technology to help people struggling with their mental health. By analyzing thousands of conversations between licensed therapists and their patients, Ash learned to behave in similar ways.
As people interacted with the chatbot, the company also refined Ash’s behavior by showing it what it was doing right and wrong. Almost daily, Slingshot’s team of psychologists and technologists rated the chatbot’s responses and, in some cases, rewrote them to demonstrate the ideal way of dealing with particular situations. They acted like tutors, giving Ash pointed feedback to improve its behavior.
Still, like all chatbots trained in this way, Ash sometimes does the unexpected or makes mistakes.
“A lot of therapy is about externalizing your internal world – getting things out, saying them. Ash allows me to do that,” said Randy Bryan Moore, 35, a social worker in Richmond, Virginia, who has used the chatbot since the summer. “But it is not a replacement for human connection.”
In March, psychologists at Dartmouth College published the results of a clinical trial of TheraBot, a chatbot the university had developed for more than six years. The chatbot significantly reduced users’ symptoms of depression, anxiety and eating disorders, the trial found. Still, the Dartmouth psychologists believed their technology was not ready for widespread use.
“The evidence for its effectiveness is strong,” said Nicholas Jacobson, one of the clinical psychologists who oversaw the TheraBot project. “Our focus now is on a highly careful, safety-conscious and evidence-based approach to making it available.”
©2025 The New York Times Company
This article originally appeared in The New York Times.
