Explainer-What is the European Union AI Act?

FILE PHOTO: A response by ChatGPT, an AI chatbot developed by OpenAI, is seen on its website in this illustration picture taken February 9, 2023. REUTERS/Florence Lo/Illustration/File Photo

LONDON (Reuters) - The AI Act, in the works for more than two years, is expected to be a landmark piece of EU legislation governing the use of artificial intelligence in Europe.

Lawmakers have proposed classifying different AI tools according to their perceived level of risk, from low to unacceptable. Governments and companies using these tools will have different obligations, depending on the risk level.


The Act is expansive and will govern anyone who provides a product or a service that uses AI. It will cover systems that can generate output such as content, predictions, recommendations, or decisions that influence the environments they interact with.

Apart from uses of AI by companies, it will also cover AI used in the public sector and in law enforcement. It will work in tandem with other laws such as the General Data Protection Regulation (GDPR).

Those using AI systems that interact with humans, are used for surveillance purposes, or can be used to generate "deepfake" content face strong transparency obligations.


A number of AI tools may be considered high risk, such as those used in critical infrastructure, law enforcement, or education. They are one level below "unacceptable," and therefore are not banned outright.

Instead, those using high-risk AI will likely be obliged to complete rigorous risk assessments, log their activities, and make data available to authorities for scrutiny, which is likely to increase compliance costs for companies.

Many of the "high risk" categories where AI use will be strictly controlled would be areas such as law enforcement, migration, infrastructure, product safety and administration of justice.


A GPAIS (General Purpose AI System) is a category proposed by lawmakers to account for AI tools with more than one application, such as generative AI models like ChatGPT.

Lawmakers are currently debating whether all forms of GPAIS will be designated high risk, and what that would mean for technology companies looking to build AI into their products. The draft does not clarify what obligations AI system manufacturers would be subject to.


The proposals say those found in breach of the AI Act face fines of up to 30 million euros or 6% of global annual turnover, whichever is higher.

For a company like Microsoft, which is backing ChatGPT creator OpenAI, that could mean a fine of over $10 billion if it were found to have violated the rules.


While the industry expects the Act to be passed this year, there is no concrete deadline. The Act is being discussed by parliamentarians, and after they reach common ground, there will be a trilogue between representatives of the European Parliament, the Council of the European Union and the European Commission.

After the terms are finalised, there would be a grace period of around two years to allow affected parties to comply with the regulations.

(Reporting by Martin Coulter and Supantha Mukherjee; Editing by Bernadette Baum)
