EU AI Act published: The countdown for compliance begins

The EU AI Act has now entered into force, and the countdown to its obligations taking legal effect has begun. Organisations need to review the AI systems they use, together with their AI governance, policies and strategies, to ensure compliance.

In brief

  • The first obligations take effect from 2 February 2025.
  • AI usage will now be regulated, and organisations need to identify the AI systems they use and implement the necessary compliance measures.

On 1 August 2024, the European Union's Artificial Intelligence Act (“AI Act”) entered into force, starting the clock on its obligations taking effect. The first of these, including the prohibitions on certain AI practices and the AI literacy requirements, apply from 2 February 2025, and organisations need to be ready.

The AI Act is designed to establish a harmonised legal framework for the regulation of certain types of AI systems and general-purpose AI (“GPAI”) models. Its publication marks a new era of digital governance within the EU, regulating the use of AI across the single market for the first time. As organisations across the EU begin to grapple with the implications of the AI Act, understanding the timeline and obligations it imposes is crucial.

The AI Act carries the potential for significant fines for non-compliance, with the highest penalties reaching €35,000,000 or 7% of total worldwide annual turnover, whichever is higher.

The AI Act places obligations on those who work with AI systems and GPAI models. Both providers and deployers of AI systems have extensive obligations; however, the bulk of the obligations lie with providers. A provider is generally the developer of an AI system, while a deployer is generally the organisation that uses an AI system under its own authority. The AI Act also imposes obligations on others, including importers and distributors of AI systems. The definition of distributor will capture those who resell products in the EU that contain AI systems.

The AI Act categorises the various uses of AI systems into different levels of risk. It explicitly defines the categories of prohibited and high-risk AI systems, as well as GPAI models. While not explicitly set out in the Act, there are additional obligations which we have categorised as applying to limited-risk and low-risk systems. Providers of low-risk AI systems can choose to comply with voluntary codes of conduct, while limited-risk AI systems carry certain transparency obligations. The highest level of risk is deemed unacceptable; these systems are prohibited under the AI Act.

High-risk AI systems must undergo rigorous testing and be subject to a detailed risk management system. Organisations must ensure data governance and transparency, and the Act contains strict record-keeping requirements. Maintaining detailed documentation to facilitate audits and ensure traceability will be crucial.


Key dates

  • 1 August 2024: the AI Act enters into force.
  • 2 February 2025: prohibitions on unacceptable-risk AI practices and AI literacy obligations apply.
  • 2 August 2025: obligations for GPAI models, along with the governance and penalties provisions, apply.
  • 2 August 2026: the majority of the remaining obligations, including those for most high-risk AI systems, apply.
  • 2 August 2027: obligations for high-risk AI systems that are safety components of regulated products apply.

What are the next steps for businesses?

  1. Identify: organisations need to identify what, if any, AI systems they have in use. An organisation-wide review process should be undertaken.
  2. Systems and procedures: organisations should establish proper procedures for the approval and use of AI systems within the organisation; these procedures should take into account the AI Act obligations.
  3. Educate: from 2 February 2025, organisations are required to ensure their staff are sufficiently trained on the use of AI systems.
  4. Categorise: once the various AI systems are identified, they should be reviewed and categorised against the AI Act risk classifications.
  5. Identify gaps: the categorised systems should then be reviewed for compliance with the obligations under the relevant risk category.
  6. Implement: any identified compliance gaps should be remediated.

Get in touch to see how we can help you navigate the EU AI Act

Summary

With the AI Act coming into force, companies must be vigilant about the timeline and obligations it imposes. Organisations must determine how the AI systems they use are classified under the AI Act, which categorises them as low-risk, limited-risk, GPAI, high-risk or prohibited. They also need to determine whether they are a provider, deployer, importer or distributor, so they know what the new law requires of them.

