AI companies' safety practices fail to meet global standards, study shows


OpenAI and Anthropic logos are seen in this illustration taken on September 12, 2025. REUTERS/Dado Ruvic/Illustration

Dec 3 (Reuters) - The safety practices of major artificial intelligence companies, such as Anthropic, OpenAI, xAI and Meta, are "far short of emerging global standards," according to a new edition of Future of Life Institute's AI safety index released on Wednesday.

The institute said the safety evaluation, conducted by an independent panel of experts, found that while the companies were busy racing to develop superintelligence, none had a robust strategy for controlling such advanced systems.


