OpenAI CEO's ouster brings EU regulatory debate into focus


FILE PHOTO: Sam Altman, CEO of OpenAI, attends the Asia-Pacific Economic Cooperation (APEC) CEO Summit in San Francisco, California, U.S. November 16, 2023. REUTERS/Carlos Barria/File Photo

LONDON (Reuters) - As the European Union edges closer to passing a wide-ranging set of laws governing artificial intelligence, lawmakers and experts say the surprise ousting of OpenAI CEO Sam Altman underscores the need for strict rules.

Altman, co-founder of the startup that last year kicked off the generative AI boom, was abruptly fired by OpenAI’s board last week, sending shockwaves through the tech world and prompting employees to threaten mass resignation.

Across the Atlantic, the European Commission, the European Parliament and the EU Council have been hashing out the fine print of the AI Act, a sweeping set of laws that would require some companies to complete extensive risk assessments and make data available to regulators.

In recent weeks, talks have hit stumbling blocks over the extent to which companies should be allowed to self-regulate.

Brando Benifei, one of two European Parliament lawmakers leading negotiations on the laws, told Reuters: “The understandable drama around Altman being sacked from OpenAI and now joining Microsoft shows us that we cannot rely on voluntary agreements brokered by visionary leaders.

“Regulation, especially when dealing with the most powerful AI models, needs to be sound, transparent and enforceable to protect our society.”

On Monday, Reuters reported that France, Germany and Italy had reached an agreement on how AI should be regulated, a move expected to accelerate negotiations at the European level.

The three governments support "mandatory self-regulation through codes of conduct" for those using generative AI models, but some experts said this would not be enough.

Alexandra van Huffelen, Dutch minister for digitalisation, told Reuters the OpenAI saga underscored the need for strict rules.

She said: “The lack of transparency and the dependence on a few influential companies in my opinion clearly underlines the necessity of regulation.”

Meanwhile, Gary Marcus, an AI expert at New York University, wrote on social media platform X: "We can’t really trust the companies to self-regulate AI where even their own internal governance can be deeply conflicted.

"Please don't gut the EU AI Act; we need it now more than ever."

(Reporting by Martin Coulter and Supantha Mukherjee; Editing by Susan Fenton)
