Govts, tech firms vow to cooperate against AI risks at Seoul summit

SEOUL: More than a dozen countries and some of the world's biggest tech firms pledged on Wednesday to cooperate against the potential dangers of artificial intelligence, including its ability to dodge human control, as they wrapped up a global summit in Seoul.

AI safety was front and centre of the agenda at the two-day gathering. In the latest declaration, more than two dozen countries including the United States and France agreed to work together against threats from cutting-edge AI, including "severe risks".

Such risks could include an AI system helping "non-state actors in advancing the development, production, acquisition or use of chemical or biological weapons", said a joint statement from the nations.

These dangers also include an AI model that could potentially "evade human oversight, including through safeguard circumvention, manipulation and deception, or autonomous replication and adaptation", they added.

The ministers' statement followed a commitment on Tuesday by some of the biggest AI companies, including ChatGPT maker OpenAI and Google DeepMind, to share how they assess the risks of their tech, including what is considered "intolerable".

The 16 tech firms also committed not to deploy any system whose risks cannot be kept below those thresholds.

The Seoul summit, co-hosted by South Korea and Britain, was organised to build on the consensus reached at the inaugural AI safety summit last year.

"As the pace of AI development accelerates, we must match that speed... if we are to grip the risks," UK technology secretary Michelle Donelan said.

"Simultaneously, we must turn our attention to risk mitigation outside these models, ensuring that society as a whole becomes resilient to the risks posed by AI."

The summit also saw a separate commitment – the so-called Seoul AI Business Pledge – from a group of tech companies including South Korea's Samsung Electronics and US titan IBM, to develop AI responsibly.

AI is "a tool in the hands of humans. And now is our moment to decide how we're going to use it as a society, as companies, as governments," Christina Montgomery, IBM's Chief Privacy and Trust Officer, told AFP on the sidelines of the summit.

"Anything can be misused, including AI technology," she added. "We need to put guardrails in place, we need to put protections in place, we need to think about how we're going to use it in the future."

Seeking consensus

AI's proponents have heralded it as a breakthrough that will improve lives and businesses around the world, especially after the stratospheric success of ChatGPT.

However, critics, rights activists and governments have warned that the technology can be misused in a wide variety of ways, including election manipulation through AI-generated disinformation such as "deepfake" pictures and videos of politicians.

Many have called for international standards to govern the development and use of AI. But experts at the Seoul summit warned that the technology's rapid evolution poses a huge challenge for regulators.

"Dealing with AI, I expect to be one of the biggest challenges that governments all across the world will have over the next couple of decades," said Markus Anderljung, head of policy at the UK-based non-profit Centre for the Governance of AI.

Jack Clark, co-founder of the AI startup Anthropic, said consensus on AI safety cannot be left to tech firms alone, and that government and academic experts are needed in the conversation.

"At this summit, I've actually been asking every single person I met with: What's safety to you? And I've had a different answer from each person," Clark told reporters. "And I think that illustrates the problem."

"You aren't going to arrive at consensus by the companies alone, and if you did, I doubt it would be the correct one."

Also on the agenda in Seoul was ensuring that AI is inclusive and open to all.

It is not just the "runaway AI" of science fiction nightmares that is a huge concern, but also inequality, said Rumman Chowdhury, an AI ethics expert who leads the non-profit AI auditor Humane Intelligence.

"All AI is just built, developed and the profits reaped (by) very, very few people and organisations," she told AFP.

People in developing countries such as India "are often the staff that does the clean-up. They're the data annotators, they're the content moderators. They're scrubbing the ground so that everybody else can walk on pristine territory". – AFP
