The following risk categories are likely relevant to the daily operations of most companies:
High-risk AI systems: These are systems that pose risks to health, safety, or fundamental rights. The areas of application and specific purposes covered are listed in the annex to the legislation and will be updated continuously after the law comes into effect. Currently, this list includes employment, HR management, and access to self-employment (e.g., to select applicants, make promotion decisions, or monitor employee performance and behavior); management and operation of critical infrastructure (e.g., to manage electricity networks or train traffic); and law enforcement, migration, and asylum (e.g., to simplify decision-making). Biometric monitoring of individuals also falls under the high-risk category. An AI system is likewise considered high-risk if it falls under EU-harmonized safety standards that require a conformity assessment procedure (i.e., an assessment and confirmation/certification). Standards applicable to machines, medical devices, toys, and elevators fall into this category.
The EU AI Act outlines extensive requirements for these high-risk systems:
- Risk management: A risk management system to identify, assess, and mitigate risks must be established and applied to the AI systems.
- Data quality: There are specific requirements for the data used for training, testing, and validation. For example, this data must be relevant, representative, error-free, and complete.
- Documentation: Automatically generated logs of operations and comprehensive technical documentation must be maintained.
- Transparency and information requirements: Specific transparency and information requirements apply with respect to users. These are defined in more detail in the annex to the law. Additionally, individuals must be able to monitor the AI system and intervene in its operation if necessary.
- Technical requirements: Certain IT security requirements must be met, and the system must achieve an appropriate level of accuracy and robustness. Again, the exact requirements are defined in more detail in the annex to the AI law.
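Purely as an organizational aid, and not as a legal artifact, the requirements above could be tracked per system in a simple structured record. All field and system names below are illustrative assumptions, not terms from the Act:

```python
from dataclasses import dataclass, field

@dataclass
class HighRiskComplianceRecord:
    """Illustrative checklist mirroring the high-risk requirements above."""
    system_name: str
    risk_management_established: bool = False   # risk management system
    data_quality_verified: bool = False         # training/testing/validation data
    logs_maintained: bool = False               # automatically generated logs
    technical_docs_complete: bool = False       # technical documentation
    user_information_provided: bool = False     # transparency and information
    human_oversight_possible: bool = False      # monitoring and intervention
    accuracy_robustness_tested: bool = False    # IT security, accuracy, robustness
    open_items: list[str] = field(default_factory=list)

    def is_complete(self) -> bool:
        """True only once every requirement has been checked off."""
        return all([
            self.risk_management_established, self.data_quality_verified,
            self.logs_maintained, self.technical_docs_complete,
            self.user_information_provided, self.human_oversight_possible,
            self.accuracy_robustness_tested,
        ])

# Hypothetical example: a CV-screening tool with work still outstanding.
record = HighRiskComplianceRecord("cv-screening-tool")
record.open_items.append("register system in EU database")
print(record.is_complete())  # False
```

A record like this makes the documentation obligations auditable: each flag maps to one requirement, and `open_items` captures remaining tasks such as the EU database registration discussed below.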
Those operating the AI solution are primarily responsible for compliance with these requirements. They must evaluate that compliance and ensure that the AI is monitored by a human. Furthermore, high-risk AI systems must be registered in an EU database. After the system is put into operation, the operator remains responsible for monitoring the AI system and reporting incidents to the relevant authorities.
AI systems with transparency requirements: These are systems that interact with people, perform emotion recognition or biometric classification, or generate artificial content that mimics real people, places, or objects ("deep fakes"). This content must be specifically identified so that users are aware they are interacting with AI and to prevent illegal content from being produced.
Low-risk AI systems: These are systems that are not expected to pose any risk to users or their data, such as AI solutions used in sandboxes for testing purposes. General product safety requirements apply here. The law also recommends the voluntary adoption of codes of conduct based on the regulations for high-risk AI systems, though such codes may also go beyond those regulations.
The above selection shows that classifying AI systems into the individual risk categories requires a systematic approach. For high-risk systems in particular, the strict requirements and documentation obligations mean that information must be gathered and documented in a structured manner.
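As a rough illustration of such a systematic first pass, the categories above might be sketched as follows. The criteria here are simplified assumptions for triage purposes only; the Act's annexes define the actual legal tests, which must be checked case by case:

```python
from enum import Enum

class RiskCategory(Enum):
    HIGH = "high-risk"
    TRANSPARENCY = "transparency obligations"
    LOW = "low-risk"

# Simplified, illustrative high-risk application areas drawn from the
# categories discussed above -- not an exhaustive or legally precise list.
HIGH_RISK_AREAS = {
    "employment", "hr_management", "critical_infrastructure",
    "law_enforcement", "migration", "biometric_monitoring",
}

def classify(application_area: str,
             interacts_with_people: bool = False,
             generates_deepfakes: bool = False,
             requires_conformity_assessment: bool = False) -> RiskCategory:
    """Assign a coarse, preliminary risk category to an AI system."""
    if application_area in HIGH_RISK_AREAS or requires_conformity_assessment:
        return RiskCategory.HIGH
    if interacts_with_people or generates_deepfakes:
        return RiskCategory.TRANSPARENCY
    return RiskCategory.LOW

print(classify("hr_management").value)                        # high-risk
print(classify("support", interacts_with_people=True).value)  # transparency obligations
```

Such a triage helper cannot replace a legal assessment, but it forces the structured information-gathering the text describes: for each system, the application area and relevant properties must be recorded before a category can even be proposed.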
In addition to the above systems, there are also specific requirements for what the legislation calls General Purpose AI (GPAI). The final definition of such systems has not yet been published, but generally, they include AI models trained through "self-supervised learning" on large amounts of data for a variety of purposes rather than a single specific task. Examples include large language models such as OpenAI's GPT-3 and GPT-4, used in ChatGPT and elsewhere; DALL-E, also developed by OpenAI; and Google's Bard.
Such models are subject to additional requirements regarding transparency, compliance with copyright requirements, and the publication of a "detailed summary" of the data used to train the models. What exactly is meant by a detailed summary is yet to be determined.
If such GPAI systems are trained with significant computing power (the EU cites a threshold of 10^25 floating-point operations, or FLOPs), further requirements apply regarding the evaluation of the models, the analysis of systemic risks associated with them, and the obligation to document and report incidents involving such GPAIs to the European Commission.
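To get a feel for the 10^25 FLOP threshold, training compute can be roughly estimated. The calculation below uses the common "6 × parameters × training tokens" heuristic for transformer models; this heuristic and the example model sizes are assumptions for illustration, not a method prescribed by the Act:

```python
# The EU's training-compute threshold for systemic-risk GPAI obligations.
THRESHOLD_FLOPS = 1e25

def estimated_training_flops(parameters: float, tokens: float) -> float:
    """Rough transformer training-compute estimate: 6 * params * tokens."""
    return 6 * parameters * tokens

def exceeds_threshold(parameters: float, tokens: float) -> bool:
    """Would a model of this scale fall under the extra GPAI requirements?"""
    return estimated_training_flops(parameters, tokens) >= THRESHOLD_FLOPS

# Hypothetical 70-billion-parameter model trained on 2 trillion tokens:
flops = estimated_training_flops(70e9, 2e12)
print(f"{flops:.2e} FLOPs")          # 8.40e+23 FLOPs -> below 1e25
print(exceeds_threshold(70e9, 2e12))  # False
```

The takeaway is that the threshold targets only the very largest training runs; under this heuristic, a model roughly an order of magnitude beyond today's typical large-scale runs would be needed to cross it.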
Step 3: Create clear timelines
Even if it is unclear whether the timeline for the EU AI Act's implementation can be met, it is advisable to conduct an initial risk assessment of the systems used in the company now, both to prepare for the law's entry into force and, where applicable, to implement measures for compliance. This is especially recommended for complex organizations or those with a large number of AI applications. Early preparation prevents complications later and allows the company to issue rules governing the development or purchase of AI applications in a timely manner. Implementing changes that require intervention in applications already in use will be complex and time-consuming and will require detailed discussions with the relevant operational units. Large-scale tasks announced shortly before a deadline can quickly lead to dissatisfaction and frustration. Clear timelines and instructions make the work easier to plan.
Various bodies, including the European standardization organizations CEN and CENELEC as well as ISO, IEC, and various industry organizations, are already working on standards and guidelines for best practices regarding the AI law. Furthermore, the EU AI Pact from the European Commission calls on companies to implement the requirements, help shape best practices, and exchange ideas about the AI law with other affected companies. Through its membership in many of these bodies, EY is happy to help you better understand the requirements and support you in their implementation.