Summit host South Korea says world must cooperate on AI technology


Han Duck-soo, South Korean Prime Minister, gives a speech during the opening ceremony of the AI Global Forum in Seoul, South Korea, May 22, 2024. REUTERS/Kim Soo-hyeon

SEOUL (Reuters) - South Korea's science and information technology minister said on Wednesday the world must cooperate to ensure the successful development of AI, as a global summit on the rapidly evolving technology hosted by his country wrapped up.

The AI summit in Seoul, co-hosted with Britain, discussed concerns such as job security, copyright and inequality on Wednesday, a day after 16 tech companies signed a voluntary agreement to develop AI safely.

A separate pledge was signed on Wednesday by 14 companies including Alphabet's Google, Microsoft, OpenAI and six Korean companies to use methods such as watermarking to help identify AI-generated content, as well as ensure job creation and help for socially vulnerable groups.

"Cooperation is not an option, it is a necessity," Lee Jong-Ho, South Korea's Minister of Science and ICT (information and communication technologies), said in an interview with Reuters.

"The Seoul summit has further shaped AI safety talks and added discussions about innovation and inclusivity," Lee said, adding he expects discussions at the next summit to include more collaboration on AI safety institutes.

The first global AI summit was held in Britain in November, and the next in-person gathering is due to take place in France, likely in 2025.

Ministers and officials from multiple countries discussed on Wednesday cooperation between state-backed AI safety institutes to help regulate the technology.

AI experts welcomed the steps made so far to start regulating the technology, though some said rules needed to be enforced.

"We need to move past voluntary... the people affected should be setting the rules via governments," said Francine Bennett, Director at the AI-focused Ada Lovelace Institute.

AI services should be required to meet mandatory safety standards before reaching the market, so that companies come to equate safety with profit and avoid a public backlash from unexpected harm, said Max Tegmark, president of the Future of Life Institute, an organisation vocal about the risks of AI systems.

South Korean science minister Lee said that laws tended to lag behind the speed of advancement in technologies like AI.

"But for safe use by the public, there needs to be flexible laws and regulations in place."

(Reporting by Joyce Lee; Editing by Ed Davies)
