AI companies' safety practices fail to meet global standards, study shows


OpenAI and Anthropic logos are seen in this illustration taken on September 12, 2025. REUTERS/Dado Ruvic/Illustration

Dec 3 (Reuters) - The safety practices of major artificial intelligence companies, such as Anthropic, OpenAI, xAI and Meta, are "far short of emerging global standards," according to a new edition of Future of Life Institute's AI safety index released on Wednesday.

The institute said the safety evaluation, conducted by an independent panel of experts, found that while the companies were busy racing to develop superintelligence, none had a robust strategy for controlling such advanced systems.

The study comes amid heightened public concern about the societal impact of smarter-than-human systems capable of reasoning, after several cases of suicide and self-harm were tied to AI chatbots.

"Despite recent uproar over AI-powered hacking and AI driving people to psychosis and self-harm, US AI companies remain less regulated than restaurants and continue lobbying against binding safety standards," said Max Tegmark, MIT professor and Future of Life president.

The Future of Life Institute is a nonprofit organization that has raised concerns about the risks intelligent machines pose to humanity. Founded in 2014, it was supported early on by Tesla CEO Elon Musk.

In October, a group including scientists Geoffrey Hinton and Yoshua Bengio called for a ban on developing superintelligent artificial intelligence until the public demands it and science paves a safe way forward.

A Google DeepMind spokesperson said the company will "continue to innovate on safety and governance at pace with capabilities" as its models become more advanced, while xAI said "Legacy media lies", in what seemed to be an automated response.

"We share our safety frameworks, evaluations, and research to help advance industry standards, and we continuously strengthen our protections to prepare for future capabilities," an OpenAI spokesperson said.

The company invests heavily in frontier safety research and "rigorously" tests its models, the spokesperson added.

Anthropic, Meta, Z.ai, DeepSeek and Alibaba Cloud did not respond to requests for comment on the study.

(Reporting by Zaheer Kachwala in Bengaluru, additional reporting by Arnav Mishra in Bengaluru; Editing by Shinjini Ganguli and Alan Barona)
