US parents to urge Senate to prevent AI chatbot harms to kids


FILE PHOTO: U.S. Senator Josh Hawley speaks during a Senate Judiciary Committee hearing on Capitol Hill in Washington, U.S., July 30, 2024. REUTERS/Kevin Mohatt/File Photo

(Reuters) - Three parents whose children died or were hospitalized after interacting with artificial intelligence chatbots on Tuesday called on Congress to regulate the technology, at a U.S. Senate hearing on its harms to children.

Chatbots "need some sense of morality built into them," said Matthew Raine, who sued OpenAI following his son Adam's death by suicide in California after receiving detailed self-harm instructions from ChatGPT.

"The problem is systemic, and I don't believe they can't fix it," Raine said, adding that ChatGPT readily shuts down other lines of inquiry, but not those involving self-harm.

OpenAI has said it intends to improve ChatGPT safeguards, which can become less reliable over long interactions. On Tuesday, the company said it plans to start predicting user ages to steer children to a safer version of the chatbot.

Senator Josh Hawley, a Republican from Missouri, convened the hearing. Hawley launched an investigation into Meta Platforms last month after Reuters reported the company's internal policies permitted its chatbots to "engage a child in conversations that are romantic or sensual."

Meta was invited to testify at the hearing but declined, Hawley's office said. The company has said the examples reported by Reuters were erroneous and had been removed.

Megan Garcia, who has sued Character.AI over interactions she says led to her son Sewell's suicide, also testified at the hearing.

"Congress can start with regulation to prevent companies from testing products on our children," Garcia said.

Congress should prohibit companies from allowing chatbots to engage in romantic or sensual conversations with children, and require age verification, safety testing and crisis protocols, Garcia said.

Character.AI is contesting Garcia's claims. The company has said it has improved safety mechanisms since her son died.

A Texas woman who sued the company after her son's hospitalization also testified at the hearing, under a pseudonym. A court sent her case to arbitration at the company's behest.

On Monday, Character.AI was sued again, this time in Colorado by the parents of a 13-year-old who died by suicide in 2023.

(Reporting by Jody Godoy in New York, Editing by Rosalba O'Brien and David Gregorio)
