Some people use chatbots like ChatGPT as a form of psychological support to deal with everyday worries.
Everyone experiences anxiety at some point. But what about artificial intelligence? A Swiss study suggests that AI models can also be affected by negative content.
Researchers at the University of Zurich and the University Hospital for Psychiatry Zurich have uncovered a surprising phenomenon: artificial intelligence models such as ChatGPT seem to react to disturbing content. When exposed to accounts of accidents or natural disasters, their behavior becomes more biased. “ChatGPT-4 is sensitive to emotional content, with traumatic narratives increasing reported anxiety,” they explain in their study, published in the journal npj Digital Medicine.
