
Is your business prepared for the EU AI Act?

The EU AI Act, which has been approved by the EU Parliament, requires organisations to start reviewing their AI governance, policies, and strategies. The Act doesn't present blocks against AI usage but seeks to regulate it, striking a balance between organisational needs and citizen rights.


In brief

  • Establishing equilibrium between technological advancement and citizen protection is a primary focus of the AI Act, promoting responsible and ethical AI usage.
  • Although the Act is not yet in operation, organisations are advised to adapt early to its requirements to ensure a smooth transition when it is enforced.

It may be two or three years before the AI Act comes into force, but organisations need to start preparing for it now. In particular, boards must start reviewing their organisations’ AI governance, policies and strategies and assess how they might be impacted by the AI Act when it does become law across the EU.

The anticipated lead time between the AI Act being finalised and coming into force will give organisations and their boards an opportunity to prepare well in advance.

The AI Act in its current form does not present barriers to the use of AI. It seeks to regulate its use, striking a balance between the needs of organisations and the rights of citizens, and it will not regulate all AI systems.

It sets out general principles applicable to all AI systems, along with more detailed stipulations tailored to high-risk AI systems.

The advent of the legislation should not in any way alter an organisation’s attitude to, or intentions for, the use of AI, and of itself it should not deter any organisation from using it.

There is no doubt that it makes good business sense to use AI in almost every organisation. Its uses range from quite basic applications on online platforms to supporting advanced business functions such as supply chain management. It is sector- and scale-agnostic and has the potential to deliver significant benefits, including improved performance and cost efficiencies.

Of course, some organisations may decide they are not going to use AI, for the time being at least. But this does not necessarily mean that they do not need to consider the use of AI in their business and have suitable governance in place. If employees are using publicly available generative AI systems to aid them in their work, the employer could find itself dealing with unintended consequences. It is therefore important that all organisations carry out detailed reviews to identify any use of AI, both internally and across the value chain. Also, in terms of IT procurement and IT contracting, it is important to understand now which systems include AI and, in particular, which AI systems may be caught by the AI Act when it comes into force.

For those organisations already using or intending to use AI, it is important to understand that the legislation is extra-territorial in nature. It will apply across all EU countries, and an organisation from outside the EU that is planning to use AI covered by the AI Act and supply into the EU will need to comply. The alternative is to ensure that AI is used exclusively outside the EU.

Multinational companies will also have to map AI laws across the world and decide which are appropriate for them to comply with. The EU is probably the most advanced at present. In these circumstances, compliance with the AI Act may be sufficient to ensure compliance globally but this is a situation that needs monitoring and horizon scanning.

Board members and independent non-executive directors will need to focus on asking the right questions in relation to the use of AI and whether existing or future uses of AI systems may fall within the AI Act. They need to ensure they understand what the AI Act requires of their organisations and what that means in practice.

There has been a lot of talk about the ethics of AI and the avoidance of biased or discriminatory outputs is very important. In reviewing and using data in an ethical manner boards will also need to focus on matters such as data governance. For example, generative AI will use data of some kind, but not necessarily personal data covered by GDPR and other regulations. It will be important to understand precisely what kind of data these systems are using to ensure that legal and regulatory rules are complied with.

It may also be advisable to prioritise use cases for AI. For example, some use cases of AI in HR can present legal and regulatory issues due to the nature of the personal data involved and what the AI could do with it.

The AI Act will categorise AI systems into prohibited, high-risk, limited-risk and minimal-risk activities. Prohibited activities include the use of subliminal techniques to influence behaviour, social scoring, and the exploitation of vulnerabilities on certain grounds.

High-risk AI systems are those that are fully regulated by the AI Act. The specific use cases are set out in an annex to the AI Act, but certain exceptions may apply, such as where the AI system:

  • is intended to perform a narrow procedural task;
  • is intended to improve the result of a previously completed human activity;
  • is intended to detect decision-making patterns or deviations from prior decision-making patterns, and is not meant to replace or influence the previously completed human assessment without proper human review; or
  • is intended to perform a preparatory task to an assessment relevant for the use case.

AI systems more generally are subject to a lighter set of requirements, while general purpose AI models (GPAI under the AI Act) and general purpose AI models with systemic risks have specific requirements.

For many organisations the key issue will be ensuring they do not inadvertently fall into the high-risk category. This requires joined-up thinking at the outset of an AI project and a high degree of collaboration with the suppliers of the system if it is being sourced externally. Another important aspect is to ensure that the project parameters do not change at any stage during development and implementation, and that the use case itself does not drift into the high-risk category over time.

AI literacy is another key consideration. It is critically important that employees know what they are dealing with, how to use it, and its limitations. Employees, and indeed employers, need to understand that GenAI outputs need to be checked, as these systems can hallucinate, or make things up, quite convincingly. Employees will need to be trained on AI, and the intensity and frequency of that training will depend on each individual’s level of interaction with the technology, the sophistication of the AI system and their position within the organisation.

Summary 

Looking ahead, the AI Act simply cannot be ignored. Organisations need to prepare for it regardless of whether they are planning to use AI or not. Regulatory and legal advice should be sought at the earliest stage possible. And while the Act does not present any legal or regulatory blockers to the business opportunities created by AI, failure to prepare for it could mean that organisations are forced to pivot in respect of AI when it comes into force.

EY.ai - a unifying platform

A platform that unifies human capabilities and artificial intelligence to help you drive AI-enabled business transformations.


