
Why companies must prepare now for the new EU AI Act

The EU AI Act is a landmark law set to regulate the use of artificial intelligence in the EU and beyond. Find out how it applies to you.


In brief

  • The EU AI Act applies to all organizations that develop, deploy, import or distribute AI models or systems in the EU.
  • AI systems are classified into risk-based tiers with varying levels of compliance obligations.
  • Organizations must take the required steps to comply or face significant fines.

The European Union (EU) Artificial Intelligence (AI) Act, which enters into force on 1 August 2024, unifies AI regulation across the single market’s 27 member states. The legislation has several broad aims. It seeks to use legal mechanisms to protect the fundamental rights and safety of the EU population when exposed to AI; to encourage investment and innovation in the technology; and to develop a single, unfragmented market for “lawful, safe and trustworthy AI applications”.

Who is impacted by the Act?

The Act applies to all AI systems that impact people in the EU and to all organizations across the value chain, whether they are developers, deployers, importers or distributors. Crucially, the Act is extraterritorial, meaning that entities do not need to be based within the EU for it to apply. In most instances the Act does not apply retroactively, but in some limited cases AI models and systems placed on the market before it became law will have to comply.


What are the key points of the Act?

The Act requires organizations to fulfil certain obligations, depending largely on the level of risk posed by how their AI systems are used. There are two risk classification systems: one for general-purpose AI (GPAI) models and another for all other AI systems.

Risks for AI systems are classified into four tiers: prohibited, high risk, limited risk and minimal risk. Each tier has different compliance requirements, as summarized below.

Risk tier: Prohibited
Description: AI systems that pose an unacceptable risk to the safety, security and fundamental rights of people.
Compliance level: Prohibited (some exemptions will apply)
Use case examples:
  • Social scoring that could lead to detrimental treatment
  • Emotion recognition systems in the workplace
  • Predictive policing of individuals

Risk tier: High risk
Description: Permitted, subject to compliance with the requirements of the EU AI Act (including conformity assessments before being placed on the market).
Compliance level: Significant
Use case examples:
  • Use of AI in recruitment
  • Biometric identification and surveillance systems
  • Safety of critical infrastructure (e.g., energy and transport)

Risk tier: Limited risk
Description: Permitted, subject to specific transparency and disclosure obligations where use poses a limited risk.
Compliance level: Limited
Use case examples:
  • Chatbots
  • Visual or audio content manipulated by AI

Risk tier: Minimal risk
Description: Permitted, with no additional requirements where use poses minimal risk.
Compliance level: Minimal
Use case examples:
  • Photo-editing software
  • Product-recommender systems
  • Spam-filtering software
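
To make the classification step concrete, the sketch below mirrors the table as a simple lookup. It is a minimal illustration only, not a legal assessment tool: the tier names, use-case labels and classify function are all hypothetical, and a real determination must follow the Act's annexes and legal advice.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative, non-exhaustive mapping from use-case labels to tiers,
# taken from the examples in the table above.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.PROHIBITED,
    "workplace_emotion_recognition": RiskTier.PROHIBITED,
    "individual_predictive_policing": RiskTier.PROHIBITED,
    "recruitment": RiskTier.HIGH,
    "biometric_identification": RiskTier.HIGH,
    "critical_infrastructure_safety": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "ai_manipulated_media": RiskTier.LIMITED,
    "photo_editing": RiskTier.MINIMAL,
    "product_recommendation": RiskTier.MINIMAL,
    "spam_filtering": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Look up a known use case; refuse to guess for anything else."""
    try:
        return USE_CASE_TIERS[use_case]
    except KeyError:
        raise ValueError(f"Unclassified use case {use_case!r}: route to legal review")

print(classify("recruitment"))  # RiskTier.HIGH
```

Note that an unknown use case raises an error rather than defaulting to minimal risk: when in doubt, a system should be escalated for review, not assumed compliant.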

GPAI models, including generative AI, also follow a tiered approach, but within a separate classification framework that applies additional transparency requirements. The strictest obligations apply to the most powerful GPAI models, those posing a “systemic risk”.

Tier: Base-level risk
Description: Models meeting the GPAI definition.
Compliance level: Limited transparency obligations.

Tier: Systemic risk
Description: High-impact GPAI models posing a systemic risk, provisionally identified by the cumulative amount of computing power used for training (greater than 10^25 floating-point operations). The Commission can also designate a GPAI model in this tier if it has capabilities or impact equivalent to those above.
Compliance level: Significant obligations.
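
As a rough illustration of the compute threshold, the snippet below applies the widely used 6 × parameters × tokens rule of thumb for estimating training FLOPs. That heuristic is an industry approximation, not part of the Act; only the 10^25 FLOP trigger comes from the legislation, and the model sizes shown are hypothetical.

```python
# The Act's provisional trigger for "systemic risk" GPAI models
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    # 6 * N * D is a common industry rule of thumb for dense transformer
    # training compute; it is an approximation, not part of the Act.
    return 6.0 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    return estimated_training_flops(n_params, n_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical example: a 70B-parameter model trained on 15T tokens lands
# at roughly 6 * 7e10 * 1.5e13 = 6.3e24 FLOPs, just under the threshold.
print(presumed_systemic_risk(7e10, 1.5e13))  # False
```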

When does the Act come into force?

The EU AI Act enters into force on 1 August 2024; however, its tiered compliance obligations take effect in stages over several years. For instance, organizations must comply with the Act’s prohibitions within six months (by 2 February 2025) and with most GPAI obligations within one year (by 2 August 2025). Most other obligations must be met within two years (by 2 August 2026).
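
For planning purposes, the staged deadlines can also be tracked programmatically. The snippet below simply encodes the dates from the paragraph above and reports how far away each one is.

```python
from datetime import date

# Staged application dates described above
MILESTONES = {
    "prohibitions apply": date(2025, 2, 2),
    "most GPAI obligations apply": date(2025, 8, 2),
    "most other obligations apply": date(2026, 8, 2),
}

today = date.today()
for label, deadline in MILESTONES.items():
    days_left = (deadline - today).days
    status = f"{days_left} days remaining" if days_left > 0 else "already in effect"
    print(f"{deadline:%d %B %Y}: {label} ({status})")
```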

What are the penalties for non-compliance?

The penalties are significant. A company may be fined up to €35 million or 7% of its worldwide annual turnover (revenue), whichever is higher, for breaching the prohibited AI system requirements. Non-compliance with high-risk AI system requirements could result in a fine of up to €15 million or 3% of worldwide annual turnover, again whichever is higher. Supplying incorrect or misleading information to regulators could result in a fine of up to €7.5 million or 1% of worldwide annual turnover.
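
The “whichever is higher” structure is easy to check with a worked example. In the sketch below, the turnover figure is hypothetical; the caps and percentages are those listed above.

```python
def max_fine_eur(turnover_eur: float, cap_eur: float, pct: float) -> float:
    """Fine ceiling: the higher of a fixed cap or a share of worldwide
    annual turnover."""
    return max(cap_eur, pct * turnover_eur)

turnover = 2_000_000_000  # hypothetical EUR 2bn worldwide annual turnover

print(max_fine_eur(turnover, 35_000_000, 0.07))  # prohibited practices -> 140,000,000.0
print(max_fine_eur(turnover, 15_000_000, 0.03))  # high-risk breaches   -> 60,000,000.0
print(max_fine_eur(turnover, 7_500_000, 0.01))   # misleading info      -> 20,000,000.0
```

For a large company, the percentage-based ceiling quickly dwarfs the fixed cap, which is why turnover, not the headline euro figure, drives the real exposure.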

It is useful to note that most AI systems already on the market (other than prohibited ones) will not have to retroactively comply, unless they have been subject to substantial modification after the dates of application.

What should you do to prepare for the Act?

Consider the following actions:

1. Create an inventory of all the AI systems you have – or plan to have – and determine whether any fall within the scope of the EU AI Act. (A minimal inventory record is sketched after this list.)

2. Assess and categorize the in-scope AI systems to determine their risk classification and identify the applicable compliance requirements.

3. Understand your organization’s position in the relevant AI value chains, the associated compliance obligations and how these will be met. Embed compliance within all responsible functions along the value chain, throughout the lifecycle of the AI system.

4. Consider what other questions, risks and opportunities the EU AI Act poses to your operations and strategy.

  • Risks include interaction with other EU or non-EU regulations, including those on data privacy.
  • Opportunities could involve access to AI research and development channels. The EU is, for example, establishing “sandboxes” where innovators, small and medium-sized enterprises, and others can choose to experiment, test, train and validate their systems under regulatory supervision before taking them to market.

5. Develop and execute a plan to ensure that the appropriate accountability and governance frameworks, risk management and control systems, quality management, monitoring and documentation are in place by the time the relevant obligations take effect.
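
As a starting point for steps 1 and 2, an AI inventory can be as simple as a structured record per system. The sketch below is illustrative only; the field names and the example entry are hypothetical, not prescribed by the Act.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One inventory entry (field names are illustrative, not prescribed
    by the Act)."""
    name: str
    purpose: str
    role: str                        # developer / deployer / importer / distributor
    in_scope: bool                   # does the system affect people in the EU?
    risk_tier: str | None = None     # filled in during assessment (step 2)
    obligations: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord(
        name="cv-screening-tool",
        purpose="rank job applicants",
        role="deployer",
        in_scope=True,
        risk_tier="high",
        obligations=["conformity assessment", "human oversight", "logging"],
    ),
]

# Anything in scope but not yet classified still needs assessment
unassessed = [r for r in inventory if r.in_scope and r.risk_tier is None]
```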

Summary 

Complying with the EU AI Act will require a great deal of preparation for organizations in scope, particularly those developing higher-risk AI systems and general-purpose AI. However, the Act establishes a common baseline for trust, transparency and accountability for this rapidly developing technology.
