Facing teen suicide suits, Character.AI rolls out safety measures


A conversation between Sewell Setzer III and a chatbot on Character.AI displayed on his mother’s laptop in New York. The mother of Setzer, who was 14 years old when he killed himself in February, says he became obsessed with a chatbot on Character.AI before his death. — The New York Times

SAN FRANCISCO: Character.AI, once one of Silicon Valley’s most promising AI startups, announced on Dec 12 new safety measures to protect teenage users as it faces lawsuits alleging its chatbots contributed to youth suicide and self-harm.

The California-based company, founded by former Google engineers, is among several firms offering AI companions – chatbots designed to provide conversation, entertainment and emotional support through human-like interactions.
