Organisations worldwide are grappling with the dual objectives of maximising the benefits of artificial intelligence (AI) and generative AI (GenAI) technologies while ensuring their use is safe, ethical, and compliant with an ever-changing regulatory landscape.
The use cases for AI and GenAI, and the risks they present, depend on the industry sector and the nature of the organisation involved. As organisations struggle to come to grips with the technology, regulators across the world are setting out new rules for its use, along with severe penalties for infringements.
High-risk AI systems to require human oversight
In Europe, the EU AI Act sets out to establish uniform rules for AI use and aims to ensure that AI systems deployed in the EU are safe and respect fundamental rights and EU values. The Act applies to all organisations, regardless of size or sector, and entered into force on 1 August 2024. However, its obligations are being phased in: the majority of the rules will only start applying from 2 August 2026, while the rules for so-called general-purpose AI models apply 12 months after entry into force, from 2 August 2025.
The Act applies escalating obligations across three broad risk categories of AI: unacceptable, high, and limited or low risk.
Unacceptable-risk AI is prohibited and includes systems that target vulnerable people or groups, that can materially distort a person’s behaviour, or that can lead to unfair treatment of people. The high-risk category covers systems whose intended use can harm people’s health and safety or their fundamental rights. Examples include the use of AI in energy generation and water supply systems, in education, and in credit decision-making by financial institutions. Low-risk AI systems include chatbots and spam filters.
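For organisations cataloguing their AI systems against these tiers, the classification lends itself to a simple inventory structure. The sketch below is a minimal illustration only: the tier names mirror the Act's categories, but the obligation summaries are paraphrased assumptions, not the legal text, and the function and example system are hypothetical.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright under the Act
    HIGH = "high"                  # heavy obligations apply before deployment
    LIMITED = "limited"            # lighter duties (e.g. chatbots, spam filters)

# Paraphrased one-line summaries per tier (assumption: a real inventory
# would reference the Act's specific articles, not free-text notes).
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "Prohibited: must not be placed on the EU market",
    RiskTier.HIGH: "Human oversight, impact assessment, risk management",
    RiskTier.LIMITED: "Transparency duties towards users",
}

def compliance_note(system_name: str, tier: RiskTier) -> str:
    """Return a one-line compliance note for an inventoried AI system."""
    return f"{system_name}: {tier.value} risk -> {OBLIGATIONS[tier]}"

print(compliance_note("credit-scoring model", RiskTier.HIGH))
```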
The prohibition of unacceptable-risk AI systems will come into force in February 2025. Other aspects of the legislation will come into force progressively over the following months and years.
Obligations will differ depending on the nature of the AI systems in use. At the top level, organisations will need to ensure human oversight of high-risk AI systems and carry out a fundamental rights impact assessment before deploying them. They will also need to ensure that AI systems are designed to be understandable, that their decisions are explainable, and that a risk management system is in place for AI deployments.
This is by no means an exhaustive list of the obligations contained in the legislation, and organisations will need to familiarise themselves with the full requirements as soon as possible.
In addition, the Act sets out penalties for non-compliance: fines of up to €35 million or 7% of annual turnover, whichever is greater, for prohibited AI practices, while other breaches can attract fines of up to €15 million or 3% of annual turnover, whichever is greater.
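To put the "whichever is greater" rule in concrete terms, the sketch below computes the maximum applicable fine for a given turnover. The function name and example figures are illustrative assumptions; only the monetary ceilings and percentages come from the Act.

```python
def max_fine_eur(annual_turnover_eur: float, prohibited_practice: bool) -> float:
    """Illustrative calculation of the EU AI Act's maximum fine.

    Prohibited AI practices: up to EUR 35m or 7% of annual turnover,
    whichever is greater. Other breaches: up to EUR 15m or 3%.
    """
    if prohibited_practice:
        return max(35_000_000, 0.07 * annual_turnover_eur)
    return max(15_000_000, 0.03 * annual_turnover_eur)

# Hypothetical example: a company with EUR 2bn turnover deploying a
# prohibited system, where 7% of turnover exceeds the EUR 35m floor.
print(f"{max_fine_eur(2_000_000_000, prohibited_practice=True):,.0f}")  # 140,000,000
```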
With the National Institute of Standards and Technology (NIST) in the US developing guidelines for secure, trustworthy AI systems and regulators in Asia following suit, companies doing business on a global scale face a potential regulatory minefield.