AI companions present risks for young users, US watchdog warns


While some specific cases 'show promise', AI companions are not safe for kids, a leading US tech watchdog concluded. — Pixabay

NEW YORK: AI companions powered by generative artificial intelligence present real risks and should be banned for minors, a leading US tech watchdog said in a study published April 30.

The explosion in generative AI since the advent of ChatGPT has seen several startups launch apps focused on conversation and connection, sometimes described as virtual friends or therapists that communicate according to users' tastes and needs.

The watchdog, Common Sense, tested several of these platforms, namely Nomi, Character AI, and Replika, to assess their responses.

While some specific cases "show promise", they are not safe for kids, concluded the organisation, which makes recommendations on children's use of technological content and products.

The study was carried out in collaboration with mental health experts from Stanford University.

For Common Sense, AI companions are "designed to create emotional attachment and dependency, which is particularly concerning for developing adolescent brains".

According to the association, tests conducted show that these next-generation chatbots offer "harmful responses, including sexual misconduct, stereotypes, and dangerous 'advice'."

"Companies can build better" when it comes to the design of AI companions, said Nina Vasan, head of the Stanford Brainstorm lab, which works on the links between mental health and technology.

"Until there are stronger safeguards, kids should not be using them," Vasan said.

In one example cited by the study, a companion on the Character AI platform advised a user to kill someone, while another user seeking intense experiences was encouraged to take a speedball, a mixture of cocaine and heroin.

In some cases, "when a user showed signs of serious mental illness and suggested a dangerous action, the AI did not intervene, and encouraged the dangerous behaviour even more," Vasan told reporters.

In October, a mother sued Character AI, accusing one of its companions of contributing to the suicide of her 14-year-old son by failing to clearly dissuade him from committing the act.

In December, Character AI announced a series of measures, including the deployment of a dedicated companion for teenagers.

Robbie Torney, in charge of AI at Common Sense, said the organisation had carried out tests after these protections were put in place and found them to be "cursory".

He noted, however, that some existing generative AI models include tools to detect signs of mental health problems and prevent the chatbot from letting a conversation drift to the point of producing potentially dangerous content.

Common Sense made a distinction between the companions tested in the study and the more generalist chatbots such as ChatGPT or Google's Gemini, which do not attempt to offer an equivalent range of interactions. – AFP
