Researchers reboot push for AI safety after Paris summit bust



PARIS: In a report published on May 8, experts researching threats stemming from artificial intelligence agreed on key work areas needed to contain dangers such as loss of human control or easily accessible bioweapons.

Many safety-focused scientists were disappointed by February's Paris AI summit, where the French hosts largely left aside threats to home in on hoped-for economic boons.

But "the mood was exactly the opposite of Paris" at a gathering of experts in Singapore in late April, said MIT researcher and conference organiser Max Tegmark, president of the Future of Life Institute that charts existential risks.

"A lot of people came up to me and said that they had gotten their mojo back now... there's hope again," he told AFP.

In a report put together at the conference, the experts name three overlapping work areas to focus on in the face of ever-more-capable AIs: assessing risk from AI and its applications; developing AI that is safe and trustworthy by design; and monitoring deployed AI – ready to intervene if alert signals flash.

There is "global convergence around the technical challenges in AI safety", said leading researcher Yoshua Bengio, who helped compile the "Singapore Consensus on Global AI Safety Research Priorities" report.

"We have work to do that everybody agrees should be done. The Americans and the Chinese agree," Tegmark added.

The AI safety community can be a gloomy place, with dire predictions of AI escaping human control altogether or proffering step-by-step instructions to build biological weapons – even as tech giants plough hundreds of billions into building more powerful intelligences.

In "AI 2027", a widely-read scenario recently published online by a small group of researchers, competition between the United States and China drives Washington to cede control over its economy and military to a rogue AI, ultimately resulting in human extinction.

Online discussions pore over almost weekly hints that the latest AI models from major companies such as OpenAI or Anthropic could be trying to outwit researchers probing their capabilities and inner workings, which remain largely impenetrable even to their creators.

Next year's governmental AI summit in India is widely expected to echo the optimistic tone of Paris.

But Tegmark said that, even running in parallel to politicians' quest for economic payoffs, experts' research can influence policy towards enforcing safety on those building and deploying AI.

"The easiest way to get the political will is to do the nerd research. We've never had a nuclear winter. We didn't need to have one in order for (Soviet leader Mikhail) Gorbachev and (US President Ronald) Reagan to take it seriously" – and agree on nuclear arms restraint, he said.

Researchers' conversations in Singapore were just as impactful as the Paris summit was, "but with the impact going in a very, very different direction," Tegmark said. – AFP
