Most AI chatbots have murky safety provisions, researchers find


An investigation into 30 chatbots, including those from major US companies, showed that just four have published formal safety and evaluation documents. — Franziska Gabbert/dpa

WASHINGTON: Artificial intelligence (AI) chatbots are increasingly being used for everything from answering and summarising emails and texts that the recipient never reads to deciding on travel itineraries, products and meals.

But such subcontracting of thought and effort is risky, and not only because it may reduce mental engagement: researchers at universities including Stanford and the Massachusetts Institute of Technology have found that most major bots do not have published risk assessments in place, amid a "significant transparency gap" across the industry.

"Many developers tick the AI safety box by focusing on the large language model underneath, while providing little or no disclosure about the safety of the agents built on top," said the University of Cambridge’s Leon Staufer, who led the latest update of the team’s AI Agent Index.

The index covers 30 bots that are available for public use and built by developers with a market capitalisation of at least US$1bil (RM3bil) – a list that takes in the major US and European products as well as five from China.

Only seven of the 30 AI agents covered publish data from third-party testing, according to the researchers, who described such assessments as "the empirical evidence needed to rigorously assess risk."

The 30 leading AI bots fare even worse when it comes to other checks, the team found, with only four offering "formal safety and evaluation documents that cover everything from autonomy levels and behaviour to real-world risk analyses" – one fewer than the number that disclose internal safety assessments.

Only five have released details about "known security incidents," the team revealed – meaning that if other AI chatbots have been hacked, the operators may have kept quiet about it.

The findings follow the publication late last year of the Future of Life Institute's AI safety index, which found that chatbots lack suitable safety measures despite incidents of the bots seemingly encouraging people to harm themselves or break the law. – dpa
