
EU AI Act Roadmap: What does the AI Act mean for your organization?


The EU AI Act is coming soon. What does this mean and what steps should you take now?


In brief:

  • The EU AI Act takes a risk-based approach: the measures a company must implement depend on the risk category of each AI system.
  • Key steps: understanding and inventorying AI solutions and implementing an AI governance system.
  • Establish clear lines of responsibility and ensure collaboration between IT, legal, compliance, and data protection teams.

What is the EU AI Act?

The EU AI Act is a regulation and therefore applies directly in all EU member states. With the conclusion of the trilogue on December 8, 2023, the European Council, the European Commission, and the European Parliament reached agreement on the key points of the act. The final text was approved on March 13, 2024. Most provisions will take effect 24 months after the act comes into force.


Clear lines of responsibility, inventories, and risk assessments are still rare.


What does the EU AI Act cover?

The EU AI Act adopts a risk-based approach to regulating AI. AI systems are classified into different risk categories based on criteria such as their application and target audience. The law then defines measures that companies using or selling AI systems must implement, depending on the risk category. Required measures range from a simple disclosure obligation for low-risk applications (e.g., chatbots in customer service), through extensive documentation and duty-of-care obligations for high-risk applications (e.g., applications used in HR), to a complete ban on applications posing an unacceptable risk (e.g., social scoring or systems attempting to subliminally manipulate the behavior of children or individuals with intellectual disabilities).

As the examples above show, AI can be used not only in technical departments but throughout the organization. Clear lines of responsibility, inventories, and risk assessments are still rare. However, these will be necessary once the law comes into effect, as severe penalties can be imposed for violations. The maximum fine is 7% of a company's global annual revenue for the previous year or €35 million, whichever is higher. This exceeds the potential fines under the GDPR, which can reach up to 4% of revenue or €20 million. How should companies prepare for the new law?


Stakeholders must understand the EU AI Act. Management establishes lines of responsibility. If necessary, a committee coordinates the implementation and reports to management.


The AI Roadmap

1. Recognize AI's potential
Understand the principles and requirements of the EU AI Act.

2. Transform and plan
Inform management and establish clear lines of responsibility.

3. Data foundation and structure
Conduct a gap analysis and create a timeline for implementation to ensure a solid data foundation and structure.

4. External partnership ecosystems
Inventory AI-related software and classify AI systems by risk category in collaboration with external partners.

5. Internal AI expertise
Implement necessary measures and continuously assess progress using internal AI expertise.

6. Architecture and infrastructure
Prepare for the enforcement phase and ensure compliance by establishing a robust architecture and infrastructure.
 

Steps before the EU AI Act comes into effect

In the first step, stakeholders with AI responsibilities must familiarize themselves with the principles, requirements, and implications of the AI law. Management must be informed about the new legislation, as it must establish clear lines of responsibility for implementing the law's requirements. This process may not be straightforward, as there may be various functions within the company that could be held accountable. If it is impossible to assign clear responsibilities, a committee should be formed from these functions to coordinate implementation and report to management.

Relevant stakeholders include:

  • Chief Technology Officer (CTO): The IT or technology department is the obvious first point of contact for digital issues such as AI. However, meeting legal requirements through a governance system based on principles, measures, controls, and documentation requirements is usually not part of their core competencies.

  • General Counsel Office (GCO): Since the EU AI Act introduces legal requirements, the legal department is another candidate for owning compliance with the act. However, technical expertise and maintaining a comprehensive governance system are typically not classic responsibilities of the legal department.

  • Chief Compliance Officer (CCO): Many elements of an AI governance system are already present in the compliance area through the compliance management system (CMS). Additionally, the compliance department has established channels through the CMS to communicate with various other departments. It is also familiar with issues such as risk classification, deciding on measures to take, and documenting them. Furthermore, the requirements of the EU AI Act can usually be integrated into an existing standard process of risk analysis and evaluation. Since the introduction of the GDPR, the compliance department typically also has a certain level of technical expertise, which is necessary for the AI law.

  • Data Protection Officer (DPO): In addition to the compliance department, the data protection function is also a suitable department to be held accountable for the EU AI Act, especially if it is expanded into a comprehensive data governance function that considers not only data protection but also data use, legal compliance of the IT environment, and associated risks. Moreover, the data protection function is usually the interface between the CTO, GCO, and CCO, regardless of where it sits within the company's organization. Therefore, it may make sense to assign responsibility for the AI Act to the DPO, provided it is an independent department, and to give this department additional resources to fulfill a comprehensive governance function. Otherwise, assigning responsibility to the CCO seems the most feasible option.

In addition to these roles, individuals with detailed technical and legal expertise on AI from operational departments or middle management should be included in the committee to ensure that detailed questions can be answered efficiently. Furthermore, the current situation within the company should be considered, such as existing responsibilities for related issues, any potential AI and/or innovation strategies, the areas where AI is used in the company, and the existing regulatory framework. These and other factors can certainly influence the assessment of how best to organize lines of responsibility.


After the approval of the AI Act, there is a two-year period to comply with the requirements.


Timeline

In addition to defining lines of responsibility, a clear timeline of measures for implementing the requirements of the EU AI Act must be established based on a gap analysis. This gap analysis should also consider the suitability of processes and guidelines, current staff training, a comprehensive inventory of implemented AI solutions (e.g., through an AI inventory), and several other points that we will discuss below.
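
A gap analysis like this can be captured as structured data from the outset so that progress against the timeline is easy to track. Below is a minimal Python sketch; the control list, the deadline, and the function shape are illustrative assumptions, not an official checklist from the act:

```python
from datetime import date

# Illustrative control checklist for the gap analysis; a real list must be
# derived from the act's requirements for your systems' risk categories.
REQUIRED_CONTROLS = {
    "ai_inventory": "Comprehensive inventory of AI solutions in use",
    "risk_classification": "Risk category assigned to each AI system",
    "staff_training": "Staff trained on permitted AI use",
    "incident_reporting": "Channel for reporting AI incidents",
}

def gap_analysis(implemented: set[str], deadline: date) -> list[str]:
    """Return one timeline entry per control that is still missing."""
    missing = [key for key in REQUIRED_CONTROLS if key not in implemented]
    return [f"{REQUIRED_CONTROLS[key]} -- target: {deadline.isoformat()}"
            for key in missing]

# Example usage: only the inventory exists so far.
print(gap_analysis({"ai_inventory"}, date(2026, 8, 1)))
```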

 


 

During the transition phase: implement, review, prepare

During this two-year transition period, the defined timeline of measures must be implemented, and the company must continuously monitor whether milestones within the timeline are being met.

 

Step 1: Inventory – Research what is currently in use

If the company does not already have one, a comprehensive inventory of AI-related software, applications, algorithms, and solutions used in the company must be created. If available, the software asset management system within the IT department can serve as a starting point, as it usually records the installed software and associated licenses for all devices within the company.


Check each installed application for AI functionality through data analysis, then verify with surveys and expert sampling.


Each piece of installed software can then be checked to determine whether it uses, or could potentially use, AI functionality. Rather than comparing manually, this can be done through data analysis of software databases and lists. When in doubt, the starting assumption should be that if software offers AI functionality, that functionality is actually in use.

This assumption must be verified and detailed through surveys/questionnaires and sampling. Since the range of possible software and their functionalities is vast, the design and evaluation of the surveys should be conducted by experts who may already have a list of potential AI functionalities per software program and can assess the potential of the relevant software at a technical level.
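
As an illustration of this data-analysis step, here is a minimal Python sketch that matches an asset-management export against a reference list of AI-capable products and flags candidates for the survey step. The file name, column names, and product list are hypothetical placeholders:

```python
import csv

# Hypothetical reference list of products known to offer AI functionality;
# in practice this would be maintained by experts per software program.
AI_CAPABLE_PRODUCTS = {
    "copilot": "built-in generative AI assistant",
    "einstein": "predictive scoring and content generation",
    "successfactors": "optional AI-assisted HR features",
}

def flag_ai_candidates(inventory_path: str) -> list[dict]:
    """Match installed software against the reference list.

    Following the rule of thumb above: when in doubt, assume the AI
    functionality is in use and flag the entry for verification via
    surveys and expert sampling.
    """
    flagged = []
    with open(inventory_path, newline="", encoding="utf-8") as f:
        # Expected columns: software, version, department (an assumption).
        for row in csv.DictReader(f):
            name = row["software"].strip().lower()
            for product, note in AI_CAPABLE_PRODUCTS.items():
                if product in name:
                    flagged.append({**row, "ai_note": note,
                                    "status": "verify via survey"})
    return flagged

# Example usage:
# candidates = flag_ai_candidates("installed_software.csv")
```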

If an asset management system is not available, a broader survey can be conducted within the company via questionnaires to determine whether AI-capable software is being used. The broader scope does not change how the survey itself is executed, but the initial response rate is usually much lower than when participants are preselected based on installed software, so the management effort required is significantly higher.

Regardless of the specific form of the survey, it should always be designed so that at the end, all required information about the software used in the company is available to perform the risk classification described below and to demonstrate this in case of inquiries from authorities.


Based on this initial classification and the resulting grouping of AI solutions and non-AI solutions, programs must then be classified into the risk categories of the AI Act.


Step 2: Classification into risk categories and determining requirements

Based on this initial classification and the resulting grouping of AI solutions and non-AI solutions, programs must then be classified into the risk categories of the EU AI Act. The requirements of the AI Act that must be implemented follow from this. Subsequently, the measures present in the company can be evaluated to ensure that the compliance management system meets the law's requirements. Any gaps identified must be addressed before the law comes into effect.

Risk categories and requirements

The definition of terms and risk categories was a key discussion point in preparing the draft legislation. The latest version of the law includes a definition of terms and risk categories in the annex. For example, the legislation uses the OECD definition of AI.

AI systems that are seen as a clear threat to EU citizens or endanger health, safety, or fundamental rights in a way that is not proportionate to the intended purpose are completely prohibited by the AI law. This includes, for example, social scoring by government agencies and systems that attempt to predict people's behavior for law enforcement purposes ("predictive policing").

Pyramid of risk categories

Some of the risk categories likely relevant to the daily operations of most companies:

High-risk AI systems: These are systems that pose risks to health, safety, or fundamental rights. The areas of application and specific purposes covered are listed in the annex to the legislation and will be continuously updated after coming into effect. Currently, this list includes employment, HR management, and access to self-employment (e.g., to select applicants, make promotion decisions, or monitor employee performance and behavior), management and operation of critical infrastructure (e.g., to manage electricity networks or train traffic), and law enforcement, migration, and asylum (e.g., to simplify decision-making). Biometric monitoring of individuals also falls under the high-risk category. An AI system is also considered high risk if it falls under EU-harmonized safety standards that require a conformity assessment procedure (i.e., an assessment and confirmation/certification). Standards applicable to machines, medical devices, toys, and elevators fall into this category.

There are extensive requirements outlined in the EU AI Act for these high-risk systems:

  1. Risk management: A risk management system to identify, assess, and mitigate risks must be established and applied to the AI systems.
  2. Data quality: There are specific requirements for the data used for training, testing, and validation. For example, this data must be relevant, representative, error-free, and complete.
  3. Documentation: Automatically generated logs of operations and comprehensive technical documentation must be maintained (a minimal logging sketch follows this list).
  4. Transparency and information requirements: There are specific requirements for transparency and information regarding users. These are defined in more detail in the annex to the law. Additionally, individuals must be able to monitor the AI system and intervene in its operation if necessary.
  5. Technical requirements: Certain requirements regarding IT security must be adhered to. There must be an appropriate level of accuracy and robustness. Again, the exact requirements are defined in more detail in the annex to the AI law.
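
As a minimal illustration of the documentation requirement (point 3), the sketch below wraps a prediction call in an automatically generated operation log. The `model` interface, log fields, and log destination are assumptions for illustration; the act itself does not prescribe a format:

```python
import json
import logging
import time
import uuid

# A dedicated audit logger; in production the logs would go to retained,
# tamper-evident storage rather than a local file.
logging.basicConfig(filename="ai_audit.log", level=logging.INFO,
                    format="%(message)s")
audit = logging.getLogger("ai_audit")

def logged_prediction(model, features: dict, operator_id: str):
    """Run a prediction and emit an automatically generated operation log.

    `model` is any object with a `predict(features)` method -- an
    illustrative stand-in, not an interface defined by the AI Act.
    """
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "operator": operator_id,
        "input": features,
    }
    record["output"] = model.predict(features)
    audit.info(json.dumps(record, default=str))
    return record["output"]
```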
     

Those operating the AI solution are primarily responsible for ensuring compliance with the requirements. They must evaluate compliance with the requirements and ensure that the AI is monitored by a human. Furthermore, high-risk AI systems must be registered in an EU database. After the system is put into operation, the operator remains responsible for monitoring the AI systems and reporting incidents to the relevant authorities.

AI systems with transparency requirements: These are systems that interact with people, perform emotion recognition or biometric classification, or generate artificial content that mimics real people, places, or objects ("deep fakes"). Such content must be clearly identified so that users know they are interacting with AI, and to prevent illegal content from being produced.

Low-risk AI systems: These are systems that are not expected to pose any risk to users or their data, such as AI solutions used in sandboxes for testing purposes. General product safety requirements apply here. The law also recommends the voluntary adoption of codes of conduct based on the regulations for high-risk AI systems, though such codes may also go further.

The selection above shows that classifying AI systems into individual risk categories requires a systematic approach. Especially for high-risk systems, the strict requirements and documentation obligations mean that information must be gathered and documented in a structured manner.
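
A structured first pass can be as simple as mapping each system's application area to a risk category and defaulting to a conservative result for unknown areas. The mapping below is loosely based on the examples in this article and is purely illustrative; an authoritative classification must follow the act and its annex:

```python
from enum import Enum

class RiskCategory(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    TRANSPARENCY = "transparency obligations"
    LOW_RISK = "low-risk"

# Illustrative mapping of application areas to risk categories, based on
# the examples above; the authoritative list is the annex to the act.
AREA_TO_CATEGORY = {
    "social scoring": RiskCategory.PROHIBITED,
    "predictive policing": RiskCategory.PROHIBITED,
    "hr / recruitment": RiskCategory.HIGH_RISK,
    "critical infrastructure": RiskCategory.HIGH_RISK,
    "biometric monitoring": RiskCategory.HIGH_RISK,
    "customer service chatbot": RiskCategory.TRANSPARENCY,
    "deepfake generation": RiskCategory.TRANSPARENCY,
    "sandbox testing": RiskCategory.LOW_RISK,
}

def classify(application_area: str) -> RiskCategory:
    # Unknown areas default to high-risk pending expert review -- a
    # conservative working assumption, not a rule from the act itself.
    return AREA_TO_CATEGORY.get(application_area.lower(),
                                RiskCategory.HIGH_RISK)

print(classify("HR / recruitment"))  # RiskCategory.HIGH_RISK
```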

In addition to the above systems, there are also specific requirements for what the legislation calls General Purpose AI (GPAI). The final definition of such systems has not yet been published, but generally they include AI models trained through "self-supervised learning" on large amounts of data for various purposes rather than one specific task. They include large language models such as OpenAI's GPT-3 and GPT-4 used in ChatGPT and elsewhere, or DALL-E, also developed by OpenAI, and Bard developed by Google.

Such models are subject to additional requirements regarding transparency, compliance with copyright requirements, and the publication of a "detailed summary" of the data used to train the models. What exactly is meant by a detailed summary is yet to be determined.

If such GPAI systems are built with significant computing power (the EU cites a threshold of 10^25 FLOPs), there are further requirements regarding the evaluation of the models, the analysis of systemic risks associated with them, and the obligation to document and report incidents involving such GPAIs to the European Commission.
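
For a rough sense of where that threshold lies, training compute is often estimated with the common rule of thumb of about 6 FLOPs per model parameter per training token. The heuristic and the example numbers below are assumptions for illustration, not a method defined by the act:

```python
# Common rule of thumb: training FLOPs ~= 6 * parameters * training tokens.
THRESHOLD_FLOPS = 1e25  # the limit cited by the EU for systemic-risk GPAI

def estimated_training_flops(parameters: float, tokens: float) -> float:
    return 6 * parameters * tokens

# Example: a hypothetical 100B-parameter model trained on 2T tokens.
flops = estimated_training_flops(100e9, 2e12)  # 1.2e24 FLOPs
print(f"{flops:.1e} FLOPs -> systemic-risk regime: {flops > THRESHOLD_FLOPS}")
```

In this illustrative case the estimate stays below the threshold; a model an order of magnitude larger, trained on the same amount of data, would cross it.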

Step 3: Create clear timelines

Even if it is unclear whether the timeline for the EU AI Act's implementation can be met, it is advisable to conduct an initial risk assessment of the systems used in the company now to prepare for the law's coming into effect and, if applicable, to implement measures to comply with the law's requirements. This is especially recommended for complex organizations or those with a large number of AI applications. Early preparation prevents complications later and allows the company to issue rules applicable to the development or purchase of AI applications in a timely manner. Implementing changes that require intervention in applications already in use will be complex and time-consuming and will require detailed discussions with the relevant operational units. Large-scale tasks announced shortly before the deadline can quickly lead to dissatisfaction and frustration. Clear timelines and instructions make it easier to prepare the work.

Various bodies, such as the European standardization organizations CEN and CENELEC, as well as ISO, IEC, and various industry organizations, are already working on standards and guidelines for best practices regarding the AI Act. Furthermore, the EU AI Pact from the European Commission calls on companies to implement the requirements, help shape best practices, and exchange ideas about the AI Act with other affected companies. Through its membership in many of these bodies, EY is happy to help you better understand the requirements and support you in their implementation.


Through its membership in many of these organizations, EY is happy to help you better understand the requirements and support you in their implementation.


After coming into effect: implement, monitor, develop

Once the measures discussed above have been implemented, you will be prepared for the AI law's coming into effect. However, the work is not finished at this point. New AI implementations must be continuously monitored, and new applications must be developed and brought under the established processes and standards. Additionally, the process must be adjusted to expected regular changes in legislation. Ongoing training and education of staff should not be overlooked. Channels for advice and complaints must also be established and maintained.

To continuously monitor new software and applications, we recommend setting up automatic checks for new software and installations. If software is installed that may fall under the EU AI Act's regulations, such a system can send approval workflows to managers or specially trained personnel. This allows AI use to be assessed in advance and the actual deployment to be tracked in the system. Additionally, information or surveys can be sent automatically to users, or online training on the limitations of using the software can be initiated automatically.
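
As an illustration, such a check can hook into installation events from the asset management system. Everything below is a hypothetical sketch: the event shape, the watchlist, and the notification stub would need to be wired into your actual workflow tooling:

```python
# Illustrative keywords for products that may fall under the act.
AI_WATCHLIST = {"copilot", "chatgpt", "einstein"}

def send_approval_request(manager: str, event: dict) -> None:
    # Placeholder: integrate with your ticketing/workflow system here.
    print(f"Approval request sent to {manager}: {event['software']}")

def on_software_installed(event: dict) -> None:
    """Handle an installation event from the asset management system.

    `event` is assumed to look like:
    {"software": "...", "user": "...", "manager": "..."}
    """
    name = event["software"].lower()
    if any(keyword in name for keyword in AI_WATCHLIST):
        send_approval_request(event["manager"], event)
        # Follow-ups suggested above: send the user a survey, or
        # automatically enroll them in online training on permitted use.

# Example usage:
on_software_installed({"software": "GitHub Copilot",
                       "user": "jdoe", "manager": "asmith"})
```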

It is important not only to rely on a rule- and control-based approach within the company but also to create, through training and communication, a culture that takes the potential risks and dangers of AI systems into account and works to minimize or fully mitigate them. Misunderstandings about reliability, a lack of transparency, or even unintended discrimination by algorithms pose significant risks. Responsible use of AI offers the opportunity to deploy better, safer, and more capable models, and thus to gain a competitive advantage in the market.

With a well-designed control system, many of the requirements of the AI law can be complied with and documented through the measures discussed above or similar measures. Furthermore, to assess the effectiveness of controls, cases from such a system can be extracted to efficiently conduct a risk-based audit. This enables you to prepare your company well for external audits of compliance with the AI law, which can become a USP compared to your competitors if implemented early.


Summary

Is your company using AI? Many people intuitively answer no, thinking of self-driving delivery vehicles or autonomous maintenance robots. But consider this: Is the marketing department using a system to send personalized ads to customers? What models does the sales department use for forecasts? How does HR filter applications? These are all potential areas where AI is used and will soon be regulated by the EU AI Act.


Navigate your AI journey

 

Get in touch with us to learn more about EY.ai, a holistic approach to AI.

