Most AI chatbots have murky safety provisions, researchers find


An investigation into 30 chatbots, including those from major US companies, found that just four have published formal safety and evaluation documents. — Franziska Gabbert/dpa

WASHINGTON: Artificial intelligence (AI) chatbots are increasingly being used for everything from answering and summarising emails and texts the recipient never reads to deciding on travel itineraries, products and meals.

But such sub-contracting of thought and effort is risky, and not only because it may reduce mental engagement: researchers at universities including Stanford and the Massachusetts Institute of Technology have found that most major bots do not have risk assessments in place, amid a "significant transparency gap" across the industry.

