AI companies' safety practices fail to meet global standards, study shows


OpenAI and Anthropic logos are seen in this illustration taken on September 12, 2025. REUTERS/Dado Ruvic/Illustration

Dec 3 (Reuters) - The safety practices of major artificial intelligence companies, such as Anthropic, OpenAI, xAI and Meta, fall "far short of emerging global standards," according to a new edition of the Future of Life Institute's AI safety index released on Wednesday.

The institute said the safety evaluation, conducted by an independent panel of experts, found that while the companies were busy racing to develop superintelligence, none had a robust strategy for controlling such advanced systems.

The study comes amid heightened public concern about the societal impact of smarter-than-human systems capable of advanced reasoning, after several cases of suicide and self-harm were tied to AI chatbots.

"Despite recent uproar over AI-powered hacking and AI driving people to psychosis and self-harm, US AI companies remain less regulated than restaurants and continue lobbying against binding safety standards," said Max Tegmark, MIT professor and Future of Life president.

The Future of Life Institute is a nonprofit organization that has raised concerns about the risks intelligent machines pose to humanity. Founded in 2014, it was supported early on by Tesla CEO Elon Musk.

In October, a group including scientists Geoffrey Hinton and Yoshua Bengio called for a ban on developing superintelligent artificial intelligence until the public demands it and science paves a safe way forward.

A Google DeepMind spokesperson said the company will "continue to innovate on safety and governance at pace with capabilities" as its models become more advanced, while xAI said "Legacy media lies", in what seemed to be an automated response.

"We share our safety frameworks, evaluations, and research to help advance industry standards, and we continuously strengthen our protections to prepare for future capabilities," an OpenAI spokesperson said.

The company invests heavily in frontier safety research and "rigorously" tests its models, the spokesperson added.

Anthropic, Meta, Z.ai, DeepSeek and Alibaba Cloud did not respond to requests for comment on the study.

(Reporting by Zaheer Kachwala in Bengaluru, additional reporting by Arnav Mishra in Bengaluru; Editing by Shinjini Ganguli and Alan Barona)

