Facing teen suicide suits, Character.AI rolls out safety measures


A conversation between Sewell Setzer III and a chatbot on Character.AI displayed on his mother’s laptop in New York. The mother of Setzer, who was 14 years old when he killed himself in February, says he became obsessed with a chatbot on Character.AI before his death. — The New York Times

SAN FRANCISCO: Character.AI, once one of Silicon Valley’s most promising AI startups, announced new safety measures on Dec 12 to protect teenage users as it faces lawsuits alleging its chatbots contributed to youth suicide and self-harm.

The California-based company, founded by former Google engineers, is among several firms offering AI companions – chatbots designed to provide conversation, entertainment and emotional support through human-like interactions.
