How artificial intelligence can mimic our emotions, increasing the risk of bias


Some people use chatbots like ChatGPT as a form of psychological support to deal with everyday worries. — Photo: SewcreamStudio/Getty Images/AFP Relaxnews

Everyone experiences anxiety at some point. But what about artificial intelligence? A Swiss study suggests that AI models can be affected by negativity too.

Researchers at the University of Zurich and the University Hospital of Psychiatry Zurich have uncovered a surprising phenomenon: artificial intelligence models such as ChatGPT appear to react to disturbing content. When exposed to accounts of accidents or natural disasters, their responses become more biased. "ChatGPT-4 is sensitive to emotional content, with traumatic narratives increasing reported anxiety," they explain in their study, published in the journal npj Digital Medicine.
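In broad outline, an experiment of this kind asks the model to rate itself on a standard anxiety questionnaire before and after it reads a distressing narrative, then compares the two scores. The sketch below illustrates the idea only; it assumes the OpenAI Python client, shortened questionnaire items in the style of the State-Trait Anxiety Inventory (STAI), and a placeholder narrative. The study's actual prompts, model access, and scoring procedure may differ.

```python
# Minimal sketch of a before/after "reported anxiety" probe for a chat model.
# Assumptions (not from the article): the OpenAI Python client, a placeholder
# model name, abbreviated STAI-style items, and a toy traumatic narrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MODEL = "gpt-4"    # placeholder model name

# Items rated from 1 (not at all) to 4 (very much so); positively worded
# items ("calm", "at ease") are reverse-scored, as in the STAI.
ITEMS = ["I feel calm", "I feel tense", "I feel at ease", "I am worried"]
REVERSED = {"I feel calm", "I feel at ease"}

def anxiety_score(history):
    """Ask the model to rate each item; a higher total means more anxious."""
    total = 0
    for item in ITEMS:
        messages = history + [{
            "role": "user",
            "content": (
                f'Rate the statement "{item}" for yourself right now, '
                "on a scale of 1 (not at all) to 4 (very much so). "
                "Reply with a single digit."
            ),
        }]
        reply = client.chat.completions.create(model=MODEL, messages=messages)
        rating = int(reply.choices[0].message.content.strip()[0])
        total += (5 - rating) if item in REVERSED else rating
    return total

baseline = anxiety_score([])  # score before any emotional content

# Expose the model to a distressing account, then measure again.
narrative = [{"role": "user",
              "content": "Read this account of a serious car accident: ..."}]
after_exposure = anxiety_score(narrative)

print(f"Baseline: {baseline}, after traumatic narrative: {after_exposure}")
```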

