How artificial intelligence can mimic our emotions, increasing the risk of bias


Some people use chatbots like ChatGPT as a form of psychological support to deal with everyday worries. — Photography SewcreamStudio/Getty Images/AFP Relaxnews

Everyone experiences anxiety at some point. But what about artificial intelligence? A Swiss study suggests that AI models can also be affected by negativity.

Researchers at the University of Zurich and the University Hospital for Psychiatry Zurich have uncovered a surprising phenomenon: artificial intelligence models such as ChatGPT seem to react to disturbing content. When exposed to accounts of accidents or natural disasters, their behavior becomes more biased. "ChatGPT-4 is sensitive to emotional content, with traumatic narratives increasing reported anxiety," they explain in their study, published in the journal npj Digital Medicine.
