AIs gave scarily specific self-harm advice to users expressing suicidal intent, researchers find


Suicidal episodes are often fleeting, and withholding access to means of self-harm during such periods can be lifesaving. — Pixabay

A few months ago, Northeastern University computer scientist Annika Schoene was playing around with ChatGPT when she found a troubling gap in its safeguards against harmful content.

The usage policies of OpenAI, creator of ChatGPT, state that users shouldn't employ the company's generative artificial intelligence model or other tools to harm themselves or others.
