Study says AI chatbots need to fix suicide response, as family sues over ChatGPT role in boy's death


Mehrotra said there's no easy answer for AI chatbot developers 'as they struggle with the fact that millions of their users are now using it for mental health and support'. — AP

SAN FRANCISCO: A study of how three popular artificial intelligence chatbots respond to queries about suicide found that they generally avoid answering the questions that pose the highest risk to the user, such as requests for specific how-to guidance, but are inconsistent in their replies to less extreme prompts that could still harm people.

The study in the medical journal Psychiatric Services, published Tuesday by the American Psychiatric Association, found a need for "further refinement" in OpenAI's ChatGPT, Google's Gemini and Anthropic's Claude.
