Of the risk categories listed in the AI Act, probably only the following three are relevant for most companies’ day-to-day business:
High-risk AI systems are systems that pose risks to people’s health, safety or fundamental rights. The areas of use and specific purposes covered are listed in the annex to the legislation and will be updated continuously after it enters into force. At the moment, this list comprises employment, HR management and access to self-employment (e.g. to select applicants, decide on promotions or check the performance and behavior of employees), management and operation of critical infrastructure (e.g. to manage electricity networks or rail traffic), and law enforcement, migration and asylum (e.g. to simplify decisions). Biometric monitoring of people also falls into the high-risk category. An AI system is also high risk if it falls under EU harmonized safety standards for which a conformity assessment procedure (i.e. an assessment and confirmation/certification) is required. Standards applying to machinery, medical devices, toys and elevators fall into this category.
There are extensive requirements set out in the AI Act for these high-risk systems:
1. Risk management
A risk management system to identify, assess and reduce risks must be put in place and applied to the AI systems.
2. Data quality
There are certain requirements for the data used for training, testing and validation. For example, this data must be relevant, representative, free of errors and complete.
3. Documentation
There must be automatically generated logs of the system’s operations as well as comprehensive technical documentation (see the illustrative sketch after this list, which also shows a simple data-quality check relating to requirement 2).
4. Transparency and information requirements
There are particular requirements for transparency and information with respect to users. These are defined in greater detail in the annex to the Act. In addition, humans must be able to oversee the AI system and intervene in its operations, if necessary.
5. Technical requirements
Certain requirements relating to IT security must be met. There must be an appropriate degree of accuracy and robustness. Here too, the exact requirements are defined in more detail in the annex to the AI Act.
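To give an idea of what requirements 2 and 3 could look like in practice, the following is a minimal Python sketch. It is not prescribed by the Act; the function names, log format and HR example data are purely hypothetical assumptions.

```python
import json
import logging
from datetime import datetime, timezone

# Automatically generated operation logs (requirement 3), written to a file.
logging.basicConfig(filename="ai_system_operations.log", level=logging.INFO)


def check_completeness(records, required_fields):
    """Very simple data-quality check (requirement 2): find records with missing fields."""
    return [r for r in records if any(r.get(f) is None for f in required_fields)]


def log_prediction(model_version, input_data, prediction):
    """Write a machine-readable log entry for every prediction the system makes."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input": input_data,
        "prediction": prediction,
    }
    logging.info(json.dumps(entry))


# Hypothetical HR screening example.
training_records = [
    {"experience_years": 5, "education": "MSc"},
    {"experience_years": None, "education": "BSc"},  # incomplete record
]
incomplete = check_completeness(training_records, ["experience_years", "education"])
if incomplete:
    logging.warning("Training data incomplete: %d record(s) affected", len(incomplete))

log_prediction("screening-model-0.1", {"experience_years": 5}, "invite_to_interview")
```

In a real system, such checks and logs would of course be far more extensive and would need to match the documentation requirements set out in the annex.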
Those operating the AI solution are primarily responsible for ensuring that these requirements are complied with. They must evaluate compliance and ensure that the AI is overseen by a human being. In addition, high-risk AI systems must be registered in an EU database. After the system has gone into use, the operator remains responsible for monitoring it and reporting incidents to the relevant authorities.
AI systems with transparency requirements are systems that interact with people, carry out emotion recognition or biometric classification, or generate artificial content that mimics real people, places or objects (“deep fakes”). Such content must be specially identified so that users know they are interacting with an AI system or viewing AI-generated content, and to prevent illegal content from being produced.
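The Act does not prescribe a specific labeling technique. Purely as an illustration, and with a hypothetical function name and disclosure format, a generator could attach a machine-readable disclosure to every piece of content it produces:

```python
import json
from datetime import datetime, timezone


def label_generated_content(content, model_name):
    """Attach a machine-readable AI disclosure to generated content (hypothetical format)."""
    return {
        "content": content,
        "disclosure": {
            "ai_generated": True,
            "model": model_name,
            "generated_at": datetime.now(timezone.utc).isoformat(),
            "notice": "This content was generated by an AI system.",
        },
    }


labeled = label_generated_content("Caption of a synthetic press photo ...", "image-model-x")
print(json.dumps(labeled, indent=2))
```

In practice, established provenance standards (for example C2PA metadata) or visible watermarks would likely be preferable to a homegrown format.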
Low-risk AI systems are defined as those that are not expected to pose any risk to users or their data, for example, AI solutions used in sandboxes for testing purposes. The requirements of general product safety apply to these. The Act also recommends the voluntary adoption of codes of conduct which are based on the regulations for high-risk AI systems but can also go beyond them.
The above selection shows that classifying AI systems into the individual risk categories requires a systematic approach. Particularly for high-risk systems, there are strict requirements and documentation obligations, which means that information needs to be obtained and documented in a structured way.
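One way to obtain this information in a structured way is to maintain an inventory with one record per AI system. The following sketch is only an example of such a record; the field names and the sample entry are hypothetical and not taken from the Act.

```python
from dataclasses import dataclass, field


@dataclass
class AISystemRecord:
    """Hypothetical inventory entry supporting risk classification under the AI Act."""
    name: str
    purpose: str                    # what the system is used for
    area_of_use: str                # e.g. "HR management", "critical infrastructure"
    interacts_with_people: bool     # relevant for transparency requirements
    processes_biometric_data: bool  # relevant for the high-risk category
    provisional_risk_category: str  # "high", "transparency" or "low"; to be confirmed legally
    notes: list = field(default_factory=list)


inventory = [
    AISystemRecord(
        name="CV screening tool",
        purpose="Pre-selection of job applicants",
        area_of_use="HR management",
        interacts_with_people=False,
        processes_biometric_data=False,
        provisional_risk_category="high",
        notes=["Annex use case: employment and access to self-employment"],
    ),
]
```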
Alongside the above systems there are also specific requirements for what the legislation refers to as General Purpose AI (GPAI). The final definition of such systems has not yet been published, but generally they include AI models that have been trained through “self-supervised learning” on large volumes of data for a range of different purposes rather than one specific task. They include, for example, large language models such as OpenAI’s GPT-3 and GPT-4 used in ChatGPT and elsewhere, or DALL-E, also developed by OpenAI, and Bard developed by Google.
Additional requirements apply to such models with respect to transparency, compliance with copyright requirements and the publication of a “detailed summary” of the data used to train the models. What exactly is meant by a detailed summary remains to be seen.
If such GPAI systems have been trained with large amounts of computing power (the EU mentions a threshold of 10²⁵ floating-point operations (FLOPs) used in training), there are further requirements: the models must be evaluated, the systemic risks associated with them must be analyzed, and incidents involving such GPAIs must be documented and reported to the European Commission.
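For a rough sense of scale, training compute is often approximated as 6 × number of parameters × number of training tokens. This is a common rule of thumb, not a calculation method defined in the Act, and the model size in the example below is purely illustrative.

```python
# Rough estimate of cumulative training compute, using the common
# approximation: FLOPs ≈ 6 * parameters * training tokens.
EU_GPAI_THRESHOLD_FLOPS = 1e25  # threshold mentioned in the AI Act


def estimated_training_flops(n_parameters, n_tokens):
    return 6 * n_parameters * n_tokens


# Hypothetical example: a 70-billion-parameter model trained on 2 trillion tokens.
flops = estimated_training_flops(70e9, 2e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Above EU threshold" if flops > EU_GPAI_THRESHOLD_FLOPS else "Below EU threshold")
```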
Step 3: Create clear timetables
Even if it is unclear whether the timetable for the entry into force of the AI Act can be met, it makes sense to carry out an initial risk assessment of the systems in use in the company now, so as to be prepared when the legislation enters into force and, where applicable, to implement measures to meet the requirements of the Act early. This is particularly recommended for complex organizations or those with a large number of AI applications. Preparing early avoids complications later and enables the company to issue rules for developing or buying AI applications in good time. In particular, implementing changes that require intervention in applications that are already implemented and in use will be complex and time consuming and will require detailed discussions with the relevant operating units. Large-scale tasks announced shortly before the deadline can quickly cause dissatisfaction and annoyance. Clear timetables and instructions will make it easier to plan the work.
Various bodies, including the European standards organizations CEN and CENELEC, as well as ISO, IEC and several industry bodies, are already working on standards and best-practice guidelines relating to the AI Act. In addition, the European Commission’s EU AI Pact appeals to companies to implement the requirements early, help shape best practice and exchange ideas on the AI Act with other companies affected by it.