EU Artificial Intelligence Act – political agreement reached on Friday 8 December 2023
The EU AI Act is a far-reaching and comprehensive legal framework, ensuring that AI in the EU is safe and respects fundamental rights, while allowing businesses to thrive and expand.
The European Parliament and Council negotiators reached a provisional agreement on the proposal on harmonized rules on artificial intelligence (AI), the so-called Artificial Intelligence Act (‘AI Act’).
The draft regulation aims to ensure that fundamental rights, democracy, the rule of law and environmental sustainability are protected from high-risk AI, while boosting innovation and making Europe a leader in the field. The rules establish obligations for AI based on its potential risks and level of impact.
The AI Act will regulate AI directly and uniformly across all EU Member States, based on a future-proof definition of AI that is aligned with the OECD definition. However, the AI Act will also have important extraterritorial implications, as it covers all AI systems impacting people residing in the EU, regardless of where those systems are developed or deployed.
Main elements of the provisional agreement
The main elements of the provisional agreement can be summarized as follows:
a) Classification of AI systems as high-risk and prohibited AI practices
The compromise agreement provides rules on high-impact general-purpose AI systems that can cause systemic risk in the future, as well as on high-risk AI systems. A wide range of high-risk AI systems would be authorized, but subject to a set of requirements and compliance obligations along the AI value chain, to gain access to the EU market. For some uses of AI, risk is deemed unacceptable and, therefore, these systems will be banned from the EU.
b) Foundation models
Foundation models will be subject to a regime of specific transparency obligations. For high impact foundation models, a stricter regime is introduced.
c) A revised system of governance with some enforcement powers at EU level
National competent market surveillance authorities will supervise the implementation of the new rules at national level, while the creation of a new European AI Office within the European Commission will ensure coordination at European level. The European Commission will also receive support from the AI Board, which will be comprised of representatives of the Member States.
d) Law enforcement exemptions
The list of prohibitions will be extended, but with the possibility to use remote biometric identification (‘RBI’) by law enforcement authorities in public spaces, subject to safeguards.
“Real-time” RBI would be subject to strict conditions and its use would be limited in time and location, for specific purposes only.
e) Specific obligations for high-risk systems
For high-risk AI systems, clear obligations were agreed and clarified further, in particular in relation to obligations affecting the various actors in the AI value chain. Among other requirements, deployers of high-risk AI systems need to conduct a mandatory fundamental rights impact assessment prior to putting an AI system into use.
Sanctions
Non-compliance with the rules can lead to fines ranging from 7.5 million euro or 1.5% of global annual turnover to 35 million euro or 7% of global annual turnover, depending on the infringement and the size of the company.
Entry into force
While some technical elements of the AI Act are still to be finalized over the coming weeks, the political agreement is now subject to formal approval by the European Parliament and the Council and will enter into force 20 days after publication in the Official Journal. The AI Act would then become applicable two years after its entry into force, except for some specific provisions.
To bridge the transitional period before the AI Act becomes generally applicable, the Commission will be launching an AI Pact. It will convene AI developers from Europe and around the world who commit on a voluntary basis to implement key obligations of the AI Act ahead of the legal deadlines.
Next steps
Companies and organizations globally should already monitor whether the AI systems they are providing, deploying, or using are in scope of the AI Act and, if they are, determine their risk classification and related compliance obligations. Providers, deployers, and users of so-called “high-risk” and general-purpose AI systems (including foundation models and generative AI) will also need to ensure that effective AI governance frameworks and compliance systems are in place.
It will be important to understand how the AI Act interacts with relevant existing and emerging rules and regulations in the EU (e.g., the GDPR) and in other jurisdictions, as well as with voluntary codes and principles (e.g., the U.S. Executive Order on AI, and the G7 AI Principles and Code of Conduct).
We will keep you updated on any significant developments relating to this matter.
If you want to read and learn more about this topic, please feel free to reach out to the digital law experts from EY Law.