In lawsuit over teen's death, US judge rejects arguments that AI chatbots have free speech rights
In this undated photo provided by Megan Garcia of Florida in October 2024, she stands with her son, Sewell Setzer III. — Megan Garcia via AP

TALLAHASSEE, Florida: A federal judge on May 21 rejected arguments made by an artificial intelligence company that its chatbots are protected by the First Amendment – at least for now. The developers behind Character.AI are seeking to dismiss a lawsuit alleging the company’s chatbots pushed a teenage boy to kill himself.

The judge's order will allow the wrongful death lawsuit to proceed, in what legal experts say is among the latest constitutional tests of artificial intelligence.
