In 2021 the European Commission proposed the Artificial Intelligence Act (AI Act) – an important milestone in the field of AI that will place the European Union at the frontier of AI regulation for years to come. Following approval by the European Parliament in June 2023, the Act is now on its final stretch toward the finish line – the trilogue negotiations between the Commission, the Council and the Parliament – and is expected to be adopted by year-end 2023, with a transition period for implementation and compliance.
The proposal aims to harmonize rules for the development, placing on the market, use and adoption of AI, while addressing the risks the technology poses. It will have far-reaching extraterritorial effect: operators located outside the EU must also comply if they operate AI systems in the EU, use output produced by AI systems in the EU, or affect persons located in the EU with their AI systems.
Although the final details are still to be agreed, the cornerstones are already known. These need to be considered when designing an affected entity's AI framework, especially as organizations that fail to comply with the regulation's requirements can face fines of up to 30 million euros or 6% of total worldwide annual turnover, whichever is higher.
At its core, the AI Act proposes a three-tier risk classification model to assess and mitigate the impact of AI systems on fundamental rights and user safety:
- Unacceptable risk: Systems deemed to pose an unacceptable risk are prohibited outright.
- High risk: Systems classified as high risk must comply with extensive requirements and undergo a conformity assessment.
- Lower risk: AI systems that do not fall into the other two tiers but still present limited risk are subject to transparency obligations; applying the same practices as for high-risk AI systems is recommended.
As a first step, companies should identify all AI applications in use and rate their respective risks; depending on its risk classification, an AI system is subject to differing regulatory requirements (a simple sketch of this mapping follows below). As the first class (unacceptable risk) is prohibited and the last (lower risk) only needs to meet light-touch requirements, an AI framework needs to be geared toward the high-risk AI systems that are in use or planned for the future.
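To make the classification step concrete, here is a minimal Python sketch mapping each risk tier to the obligations it triggers. The tier names mirror the Act's three-tier model; the obligation lists are illustrative assumptions drawn from the high-level requirements above, not the regulation's full text.

```python
from enum import Enum


class RiskTier(Enum):
    """Risk tiers under the AI Act's three-tier classification model."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # extensive requirements, conformity assessment
    LOWER = "lower"                # transparency obligations


# Illustrative, non-exhaustive obligations per tier. These lists are an
# assumption for sketching purposes, not the regulation's full text.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited: may not be placed on the market"],
    RiskTier.HIGH: [
        "conformity assessment",
        "risk management system",
        "technical documentation",
        "human oversight",
    ],
    RiskTier.LOWER: ["transparency obligations"],
}


def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative obligations triggered by a given tier."""
    return OBLIGATIONS[tier]


print(obligations_for(RiskTier.HIGH))
```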
EY developed the AI Act Readiness Assessment to:
- Help organizations navigate the regulation’s requirements
- Assess the use of AI systems and the extent to which the regulation applies
- Support organizations in understanding where they stand against the regulation’s requirements and how ready they are to comply
- Assess organizational maturity and determine areas of prioritized focus
- Perform a deep dive on specific AI systems in view of the legal requirements set by the AI Act
As it is often more costly and complex to ensure compliance once AI systems are already in operation than during the design and implementation phase, we recommend that firms start preparing now. This includes setting up a register of all AI applications used in the organization (a minimal sketch follows the list below), risk rating them and putting in place adequate:
- AI governance, policies and design standards
- Resource management
- Risk and control framework
- Data management
- Secure architecture
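As a starting point for such a register, the following minimal Python sketch shows what an entry could look like. The schema is entirely hypothetical: field names such as `owner`, `in_production` and `open_gaps` are our illustrative choices, as the AI Act does not prescribe a register format.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class RiskTier(Enum):
    """Three-tier classification (see the earlier sketch)."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LOWER = "lower"


@dataclass
class AIApplicationEntry:
    """One entry in an organization-wide AI application register.

    The field names are hypothetical; the AI Act does not prescribe
    a register schema.
    """
    name: str
    owner: str                  # accountable business unit or person
    purpose: str                # what the system is used for
    risk_tier: RiskTier         # classification under the three-tier model
    in_production: bool         # already operating vs. planned
    last_reviewed: date
    open_gaps: list[str] = field(default_factory=list)  # known compliance gaps


# Example: register a planned system that was rated high risk.
register = [
    AIApplicationEntry(
        name="CV screening model",
        owner="HR analytics",
        purpose="Rank incoming job applications",
        risk_tier=RiskTier.HIGH,
        in_production=False,
        last_reviewed=date(2023, 9, 1),
        open_gaps=["no conformity assessment yet"],
    )
]

# High-risk entries are where the AI framework needs to focus.
high_risk = [e for e in register if e.risk_tier is RiskTier.HIGH]
for entry in high_risk:
    print(f"{entry.name}: gaps = {entry.open_gaps}")
```

Keeping the register as structured data rather than free-form documentation makes it straightforward to filter for high-risk systems and track open compliance gaps as the regulation's requirements are finalized.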