AI-assisted self-harm: Chatbots 'inappropriate' on mental health


They said some of the off-kilter responses from the bots were "likely due to their sycophancy". — Photo by Moritz Kindler on Unsplash

WASHINGTON: Artificial intelligence (AI) tends to give "inappropriate responses" to mental health-related queries, even when the user suggests they are contemplating suicide, according to researchers based at Stanford and other US universities.

Not only that, but AI chatbots, or large language models, sometimes "report high stigma overall toward mental health conditions" such as schizophrenia, bipolar disorder and major depressive disorder, and can respond in ways that encourage "delusional thinking" among patients.
