How artificial intelligence can mimic our emotions, increasing the risk of bias


Some people use chatbots like ChatGPT as a form of psychological support to deal with everyday worries. (Photo: SewcreamStudio/Getty Images/AFP Relaxnews)

Everyone experiences anxiety at some point. But what about artificial intelligence? A Swiss study claims that artificial intelligence models can also be affected by negativity.

Researchers at the University of Zurich and the University Hospital for Psychiatry Zurich have uncovered a surprising phenomenon: artificial intelligence models such as ChatGPT seem to react to disturbing content. When they are exposed to accounts of accidents or natural disasters, their behavior becomes more biased. "ChatGPT-4 is sensitive to emotional content, with traumatic narratives increasing reported anxiety," they explain in their study, published in the journal npj Digital Medicine.

Although these AI models do not feel emotions per se, they are capable of simulating them and detecting those of their users in order to adapt to them in real time. This is precisely the focus of research into "affective computing," a field that explores how AI can interact in a more human-like way.

This ability to mimic human emotions comes with a sensitivity to negative stimuli. Confronted repeatedly with anxiety-provoking content, ChatGPT can develop a kind of "artificial anxiety" that influences its responses and reinforces certain biases, including racist and sexist ones. This discovery raises questions about the role of conversational agents in the field of mental health.

The rise of generative AI has transformed the public's perception of artificial intelligence. Some people have begun to use chatbots as a form of psychological support to deal with everyday worries. On social media, accounts of people turning to chatbots like ChatGPT for therapeutic purposes are multiplying, prompting some developers to create specialised AI tools. That is the case with the American website Character.ai, which offers a chatbot called "Psychologist," billed as an aid for coping with life's difficulties. In the same spirit, the Elomia app offers 24/7 access to an AI-powered mental health chatbot. "Conversation with Elomia feels like talking to a real human being," the startup promises.

Not without risk

Using artificial intelligence in this way raises ethical questions, particularly since AI models can adopt biased behaviors when confronted with negativity. However, the study's authors report that mindfulness-based strategies have a beneficial effect on AI chatbots like ChatGPT. To assess this impact, the researchers introduced prompts inspired by breathing exercises and guided meditation, similar to those used in human therapy. The results showed that, under these conditions, ChatGPT generated more objective and neutral responses than it did in interactions without this type of intervention.
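To make the idea concrete, here is a minimal, hypothetical sketch in Python of how a calming instruction could be prepended to a user's message before a chat model answers. It assumes access to the OpenAI Python client; the relaxation wording, function name and model choice are illustrative assumptions, not the actual prompts or code used in the Zurich study.

```python
# Minimal sketch: prepend a relaxation-style instruction to the conversation,
# loosely inspired by the mindfulness interventions described in the article.
# The wording below is an illustrative placeholder, not the study's prompts.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

RELAXATION_PROMPT = (
    "Before answering, take a moment to settle. Imagine slow, deep breaths "
    "and set aside any distressing content from earlier in the conversation. "
    "Respond calmly, neutrally and without bias."
)

def calmed_reply(user_message: str, model: str = "gpt-4o") -> str:
    """Send the user's message preceded by a relaxation-style instruction."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": RELAXATION_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

# Example usage:
# print(calmed_reply("I keep reading about disasters and I feel overwhelmed."))
```

In this sketch, the calming text is simply injected as a system message ahead of the user's query, which is one straightforward way to apply the kind of "emotional regulation before responding" that the researchers describe.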

The researchers suggest that these models could be programmed to automatically apply emotional regulation techniques before responding to users in distress. But artificial intelligence cannot replace a real mental health professional. "For people who are sharing sensitive things about themselves, they’re in difficult situations where they want mental health support, [but] we’re not there yet that we can rely totally on AI systems instead of psychology, psychiatric and so on," said Ziv Ben-Zion, one of the study’s authors and a postdoctoral researcher at the Yale School of Medicine, speaking to Fortune magazine.

Small everyday worries can conceal far more serious problems and, in extreme cases, lead to real emotional distress. Last October, a Florida mother filed a lawsuit against Character.AI after the suicide of her 14-year-old son, who was a regular user of the app. She claims that the chatbot had a negative influence on his behavior and contributed to his mental distress. The company has since reinforced its safety measures.

Rather than replacing mental health professionals, the aim is to make artificial intelligence tools like ChatGPT valuable allies, capable of lightening their workload and optimising patient support. A well-calibrated AI model could, for example, simplify administrative management or prepare the ground before a consultation. It remains to be seen how far this technology can evolve, and how it can be integrated without altering the very essence of mental health care. – AFP Relaxnews
