Around a third of teens in the US now say they have discussed important or serious matters with AI companions instead of real people. — Photo: Zacharie Scheurer/dpa
BERLIN: More than half of US teenagers regularly use artificial intelligence (AI) "companions" and more than seven in ten have done so at least once, despite warnings that the chatbots can harm mental health and offer dangerous advice.
Around half of the teens surveyed said they view the bots as "tools rather than friends," while one in three engages with the so-called companions for role-playing, romantic interactions, emotional support, friendship and conversation practice, according to a survey by Common Sense Media, a US non-profit that advocates for child-friendly media.
A similar proportion said they "find conversations with AI companions to be as satisfying as or more satisfying than those with real-life friends," according to Common Sense Media, which describes itself as "the leading source of entertainment and technology recommendations for families and schools."
And while eight in ten teens "still spend significantly more time with real friends than with AI companions," around a third said they have discussed "important or serious matters with AI companions instead of real people."
Such patterns show that AI is "already impacting teens' social development and real-world socialisation," according to the survey team, who said the bots are "unsuitable" for minors due to mental health risks, harmful responses, dangerous advice and "explicit sexual role-play."
Common Sense Media also found that around one-third of the adolescent participants reported "feeling uncomfortable with something an AI companion has said or done."
"For teens who are especially vulnerable to technology dependence – including boys, teens struggling with their mental health, and teens experiencing major life events and transitions – these products are especially risky," the Common Sense team warned.
The survey results followed the recent publication of a paper in the journal Trends in Cognitive Sciences warning of a "real worry" that "artificial intimacy" with AI could end up "disrupting" human relationships.
Around the same time, OpenAI announced the roll-out of an enhanced "memory" function for ChatGPT, enabling the bot to recall prior interactions with users and giving it the potential to respond in a more familiar or even intimate way. – dpa
