How CFOs can harness the transformative power of GenAI

How organisations can embrace responsible AI

Authors
Eoin O'Reilly

EY Ireland Partner, Head of AI & Data

Passionate about innovation, data and AI.

Ivan O'Brien

EY Ireland Consulting Partner and Head of Risk

Involved in risk and control matters. Reviews information security programmes and projects.

5 minute read 3 Oct 2024

Responsible and ethical AI frameworks are key to meeting the challenge of a dynamic regulatory environment.

In brief
  • The EU and other regulators around the world are putting new rules in place to govern the safe use of AI.
  • Organisations need to develop a culture of safe and responsible AI use with humans at the centre.
  • Governance frameworks and risk management processes should enable innovation whilst also ensuring regulatory compliance.

Organisations worldwide are grappling with the dual objectives of maximising the benefits of artificial intelligence (AI) and generative AI (GenAI) technologies, while also ensuring their use is safe, ethical, and fully compliant with an ever-changing regulatory landscape.

The use cases for AI and GenAI and the risks they present are dependent on the industry sector and the nature of the organisation involved. As organisations struggle to come to grips with the technology, regulators across the world are setting out new rules for its use along with severe penalties for infringements.

High-risk AI systems will need human oversight

In Europe, the EU AI Act sets out to establish uniform rules for AI use and aims to ensure that AI systems deployed in the EU are safe and respect fundamental rights and EU values. The Act applies to all organisations, regardless of size or sector, and entered into force on 1 August 2024. However, the majority of its rules will only start applying from 2 August 2026, while the rules for so-called general-purpose AI models will apply from 2 August 2025, 12 months after entry into force.

The Act takes the approach of applying escalating obligations across three broad risk categories of AI: unacceptable, high, and limited or low risk.

Unacceptable-risk AI is prohibited and includes systems that target vulnerable people or groups, that can materially distort a person’s behaviour, and that can lead to unfair treatment of people. The high-risk category includes systems that can harm the health and safety or the fundamental rights of people as a result of their intended use; examples include AI used in energy generation and water supply systems, in education, and in credit decision-making by financial institutions. Low-risk AI systems include chatbots and spam filters.
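To make these tiers concrete for an internal AI inventory, here is a minimal Python sketch of how systems might be tagged by risk category. The tier names follow the Act, but the RiskTier enum and the example systems are illustrative assumptions rather than an official mapping.

    from enum import Enum

    class RiskTier(Enum):
        """Broad risk tiers under the EU AI Act (illustrative labels)."""
        UNACCEPTABLE = "unacceptable"         # prohibited outright
        HIGH = "high"                         # permitted with strict obligations
        LIMITED_OR_LOW = "limited_or_low"     # light-touch transparency duties

    # Hypothetical entries an organisation might record while triaging systems.
    ai_systems = {
        "behavioural manipulation tool": RiskTier.UNACCEPTABLE,
        "credit decisioning model": RiskTier.HIGH,
        "customer service chatbot": RiskTier.LIMITED_OR_LOW,
    }

    for name, tier in ai_systems.items():
        print(f"{name}: {tier.value}")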

The prohibition of unacceptable AI systems will come into force in February 2025. Other aspects of the legislation will come into force progressively over the following months and years.

Obligations will differ depending on the nature of the AI systems in use. At the top level, organisations will need to ensure human oversight of high-risk AI systems and carry out a fundamental rights impact assessment before deploying them. They will also need to ensure that AI systems are designed to be understandable, that their decisions are explainable, and that a risk management system is in place for AI deployments.
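One way to operationalise these obligations is a simple pre-deployment gate that blocks a high-risk system until each required control has been evidenced. In the sketch below the checklist items mirror the obligations described above, while the gate function itself is a hypothetical illustration.

    # Obligations for high-risk AI systems, as described above.
    REQUIRED_CONTROLS = [
        "human_oversight_defined",
        "fundamental_rights_impact_assessment_done",
        "decisions_explainable",
        "risk_management_system_in_place",
    ]

    def ready_to_deploy(evidence: dict[str, bool]) -> tuple[bool, list[str]]:
        """A high-risk system may proceed only when every control is evidenced."""
        missing = [c for c in REQUIRED_CONTROLS if not evidence.get(c, False)]
        return (not missing, missing)

    ok, gaps = ready_to_deploy({
        "human_oversight_defined": True,
        "fundamental_rights_impact_assessment_done": True,
        "decisions_explainable": False,
        "risk_management_system_in_place": True,
    })
    print(ok, gaps)  # False ['decisions_explainable']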

This is by no means an exhaustive list of the new obligations contained in the legislation, and organisations will need to familiarise themselves with the new requirements as soon as possible.

In addition, the Act sets out penalties for non-compliance, with fines of up to €35 million or 7% of annual turnover, whichever is greater, for prohibited AI practices, while other breaches can attract fines of up to €15 million or 3% of annual turnover, whichever is greater.
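As a rough illustration of how the “whichever is greater” rule works in practice, the helper below computes the fine ceiling for a given annual turnover. The thresholds come from the Act as described above; the function itself is simply a hypothetical worked example.

    def max_fine(annual_turnover_eur: float, prohibited_practice: bool) -> float:
        """Upper bound of the fine: the greater of a fixed cap or a turnover share."""
        if prohibited_practice:
            return max(35_000_000, 0.07 * annual_turnover_eur)  # €35m or 7%
        return max(15_000_000, 0.03 * annual_turnover_eur)      # €15m or 3%

    # For a company with €1bn annual turnover, 7% (€70m) exceeds the €35m cap.
    print(max_fine(1_000_000_000, prohibited_practice=True))    # 70000000.0
    print(max_fine(1_000_000_000, prohibited_practice=False))   # 30000000.0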

With the National Institute of Standards and Technology (NIST) in the US developing its own guidelines and frameworks for secure, trustworthy AI systems, and regulators in Asia following suit, companies doing business on a global scale face a potential regulatory minefield.

One way of navigating that new and highly dynamic landscape is by embracing responsible and ethical AI standards that will not only meet but exceed the new requirements being imposed by regulators.

Many regulated organisations already adopt this approach in other areas of their operations and are now applying it to their use of AI and GenAI systems. Other organisations with fewer regulatory obligations may find this more challenging, but there are some key actions they can take to ensure the responsible use of AI.

Understand what AI means in your organisation

Organisations should recognise the role AI plays in enhancing business processes, decision-making and customer experiences, and should support that understanding with clear guidance and real-world use cases. This process of understanding what AI means in the organisation should include an inventory of AI tools currently in use and those the organisation intends to buy or build in future. It should also address how the organisation will source data ethically and how that data will be used. The inventory will form the basis for what is monitored and reported on.
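A lightweight way to start such an inventory is one structured record per system. The schema below (owner, lifecycle status, data sources, a data-sourcing review flag) is one plausible minimal starting point, not a prescribed standard.

    from dataclasses import dataclass, field

    @dataclass
    class AIInventoryEntry:
        """One row in an organisation's AI inventory (illustrative schema)."""
        name: str
        purpose: str
        status: str            # e.g. "in use", "planned buy", "planned build"
        owner: str             # accountable business owner
        data_sources: list[str] = field(default_factory=list)
        data_sourcing_reviewed: bool = False  # has ethical sourcing been checked?

    entry = AIInventoryEntry(
        name="invoice-matching model",
        purpose="automate accounts payable matching",
        status="in use",
        owner="Finance Operations",
        data_sources=["ERP transactions"],
        data_sourcing_reviewed=True,
    )
    print(entry)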

Develop an AI strategy that delivers value

Achieving responsible AI begins with embedding a value-driven and sustainable AI strategy within the organisation’s culture, with humans at the centre. Staff at all levels must be educated continuously on the importance of responsible and ethical AI practices to ensure alignment with evolving AI standards.

Create a robust governance structure

AI governance frameworks should embrace responsible AI principles including transparency, accountability, fairness, explainability, reliability and privacy. 

EY Responsible AI Framework

Robust controls should be established and implemented, and processes put in place to manage risks and streamline reporting to regulators and other stakeholders. These processes should be underpinned by a monitoring framework that detects and mitigates risks as AI systems evolve over time, helping to maintain adherence to established risk thresholds and prevent unintended outcomes.
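As a minimal sketch of what such monitoring could look like in practice, the check below compares observed model metrics against pre-agreed risk thresholds and flags any breach for escalation. The metric names and threshold values are hypothetical.

    # Hypothetical risk thresholds agreed by the governance board.
    RISK_THRESHOLDS = {
        "false_positive_rate": 0.05,       # max acceptable share of wrong approvals
        "demographic_parity_gap": 0.10,    # max acceptable fairness gap
    }

    def breached_thresholds(observed: dict[str, float]) -> list[str]:
        """Return the metrics whose observed value exceeds its agreed threshold."""
        return [metric for metric, limit in RISK_THRESHOLDS.items()
                if observed.get(metric, 0.0) > limit]

    alerts = breached_thresholds({"false_positive_rate": 0.08,
                                  "demographic_parity_gap": 0.04})
    print(alerts)  # ['false_positive_rate'] -> escalate for review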

Innovate within a compliance framework

Among the key challenges in deploying AI is the need to ensure regulatory compliance while also achieving a commercial return on investment. This requires robust quality assurance processes to ensure that AI outputs are reliable, alongside an overall governance framework that ensures the ethical use of the technology. However, these processes and frameworks should be constructed in ways that support safe innovation rather than stifle it. Furthermore, the increased use of GenAI will likely mean that established AI governance processes and frameworks will need to be updated.

Summary

With the rapid growth in AI and GenAI use, organisations need to embrace responsible and ethical AI standards if they are to keep pace with both advances in the technology and the evolving regulatory landscape. Adherence to responsible AI practices and standards will ensure transparency, fairness, and human centricity. This approach will help organisations tap into AI’s enormous commercial benefits while complying with existing and new regulations.

