The advent of the legislation should not in any way alter an organisation's attitude to, or intentions for, the use of AI, and in and of itself it should not deter any organisation from using it.
There is no doubt that it makes good business sense to use AI in almost every organisation. Uses range from quite basic applications on online platforms to supporting advanced business functions such as supply chain management. AI is sector- and scale-agnostic and has the potential to deliver significant benefits, including improved performance and cost efficiencies.
Of course, some organisations may decide they are not going to use AI, for the time being at least. But this does not remove the need to consider the use of AI in their business and to have suitable governance in place. If employees are using publicly available generative AI systems to aid them in their work, the employer could find itself dealing with unintended consequences. It is therefore important that all organisations carry out detailed reviews to identify any use of AI, both internally and across the value chain. In IT procurement and IT contracting, it is also important to understand now which systems include AI, and in particular which AI systems may be caught by the AI Act when it comes into force.
For those organisations already using or intending to use AI, it is important to understand that the legislation is extra-territorial in nature. It will apply across all EU countries, and an organisation from outside the EU that plans to use AI covered by the AI Act and to supply into the EU will need to comply. The alternative is to ensure that AI is used exclusively outside the EU.
Multinational companies will also have to map AI laws across the world and decide which it is appropriate for them to comply with. The EU's regime is probably the most advanced at present. In these circumstances, compliance with the AI Act may be sufficient to ensure compliance globally, but this is a situation that needs monitoring and horizon scanning.
Board members and independent non-executive directors will need to focus on asking the right questions about the use of AI and about whether existing or future uses of AI systems may fall within the AI Act. They need to ensure they understand what the AI Act requires of their organisations and what that means in practice.
There has been a lot of talk about the ethics of AI, and the avoidance of biased or discriminatory outputs is very important. In reviewing and using data in an ethical manner, boards will also need to focus on matters such as data governance. For example, generative AI will use data of some kind, but not necessarily personal data covered by the GDPR and other regulations. It will be important to understand precisely what kind of data these systems are using, to ensure that legal and regulatory rules are complied with.
It may also be advisable to prioritise use cases for AI. For example, some use cases of AI in HR can present legal and regulatory issues due to the nature of the personal data involved and what the AI could do with it.
The AI Act will categorise AI systems according to risk: prohibited practices, high risk, limited risk, and minimal risk. Prohibited practices include the use of subliminal techniques to influence behaviour, social scoring, and the exploitation of vulnerabilities on grounds such as age or disability.