
How internal audit can govern AI risks and promote compliance

Internal audit functions must adapt to AI complexities while also fostering innovation and learning.


In brief
  • Internal audit faces challenges in managing AI risks, requiring a proactive approach to governance.
  • Chief audit executives should develop annual AI audit plans, educate teams on risks and integrate AI governance into frameworks to promote responsible use.
  • Collaboration with executive leaders and risk committees is essential for effective AI oversight.

With the buzz around artificial intelligence (AI) reaching a fever pitch, companies have been grappling with where to place their bets on use cases, how to improve their operations and business models, and how to gain greater returns on their investments. As these capabilities mature, many leaders are also retroactively coming to terms with the need for greater governance — in which chief audit executives (CAEs), drawing on their skills in understanding and mitigating risk, must have a seat at the table.

The dynamism of AI and generative AI (GenAI) has added enormous complexity across most functions in an organization, often at the urgent behest of the C-suite, while regulations globally are slowly taking shape. Meanwhile, in a recent EY survey on AI, senior executives reported rising interest in responsible AI: 61% affirmed that their interest has increased, up from 53% six months earlier, and about the same share of respondents said their interest will continue to grow over the next year.

CAEs and internal audit functions face a tall order: to guard against risks from technologies they likely don't fully understand, and to continue to evolve, without hamstringing functions that see AI and GenAI adoption as do-or-die imperatives. To stay ahead, internal audit must get up to speed on AI risks and controls so it can properly check and verify alignment and provide assurance that AI systems within the organization are used responsibly.

Proactive CAEs can develop an annual AI audit plan that may consist of multiple audits a year (instead of a single AI governance audit) and provide education about the emerging AI risk universe and the required internal audit response. An effective strategy should offer learning opportunities for employees and encourage the adoption of AI tools, alongside an agile plan that can continuously flex to meet the business needs of a rapidly evolving AI landscape. With that in mind, EY leaders have developed this playbook as a guide for CAEs to bring the full weight of internal audit to bear on this evolving landscape of risk.

Chapter #1

Key factors complicating internal audit's role in auditing AI

Internal audit faces stakeholder demands, evolving AI regulations and a need for skilled talent.

AI broadly refers to machines that mimic humanlike cognitive abilities. This includes GenAI, which creates content when prompted by a user, as well as nascent AI agents. Through its ease of use, GenAI has democratized AI, making the technology accessible to any user, whereas other types of AI have generally only been accessible to data scientists.

Against this background of dramatic change, with powerful tools in the hands of people who may not fully understand them, internal audit is confronting:

  • Increasing stakeholder demands for desired outcomes and risk mitigation. Institutional and activist investors — as well as consumers, employees and business partners — are asking more difficult questions around how companies are managing AI-related risks and issues.
  • Evolving global regulations focusing on companies’ use of AI. Jurisdictions and regulatory bodies around the world are developing guidance on the design, use and deployment of AI, including risk management.
  • Ad hoc and siloed approaches to managing AI risks and opportunities. AI issues span various functions within a company, and ownership of data, risks and controls may be unclear or unassigned. Integration of AI issues into existing governance and oversight models is limited, potentially resulting in unidentified gaps in risk coverage across the company.
  • Heightened demand for AI skill sets/upskilling talent. Organizations are increasing training and hiring new roles to address organizational ambitions and risk management activities, including oversight and governance of AI processes, risks and controls.

Chapter #2

Top considerations for AI governance

CAEs must balance AI governance with managing risks and fostering a culture of awareness.

Considering all these competing pressures, CAEs must determine how to enable AI governance, in which a defined AI strategy informs the supporting policies, procedures and operating model. Because AI moves so quickly, various executive leaders and risk committees must participate to engage and enable the lines of defense, effectively monitoring and managing program risks and safeguarding against adverse impacts while fostering innovation and operational efficiency. These stakeholders should approve an overarching framework, methodologies, and roles and responsibilities.

Along the first line of defense, operational teams within the risk-taking business units must be equipped with the tools and training to identify and manage AI risks. They should foster a culture of risk awareness and encourage proactive risk management practices. This includes owning the management of vendors that utilize AI and machine learning and performing contract reviews, for example, as well as managing data privacy considerations such as consumer notices and requests to opt out or delete information.

In the second line of defense, risk and compliance functions should define clear risk management policies and frameworks that align with the organization’s AI objectives. They should provide guidance and support to the first line in implementing risk controls and ensure continuous improvement of risk management practices. These functions would fulfill their traditional remit in conducting model testing and performance assessment for AI, for instance, as well as assessing risks and establishing controls for data security, privacy and other key heightened risks for large language models.
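As a concrete illustration of that second-line testing remit, the minimal Python sketch below compares a model's observed metrics against minimum thresholds drawn from a hypothetical risk policy. The metric names, threshold values and escalation wording are assumptions for illustration only, not prescribed values.

```python
# Illustrative second-line control: compare observed model metrics
# against minimum thresholds from a (hypothetical) AI risk policy.
from dataclasses import dataclass


@dataclass
class ModelTestResult:
    metric: str
    value: float
    threshold: float

    @property
    def passed(self) -> bool:
        return self.value >= self.threshold


def assess_model(observed: dict[str, float],
                 policy: dict[str, float]) -> list[ModelTestResult]:
    """Evaluate each policy metric; missing metrics default to 0.0 (fail)."""
    return [ModelTestResult(name, observed.get(name, 0.0), floor)
            for name, floor in policy.items()]


if __name__ == "__main__":
    # Hypothetical results from a periodic model performance test.
    observed = {"accuracy": 0.91, "precision": 0.88, "recall": 0.76}
    # Hypothetical minimum thresholds set by the second line's risk policy.
    policy = {"accuracy": 0.90, "precision": 0.85, "recall": 0.80}
    for result in assess_model(observed, policy):
        status = "PASS" if result.passed else "FAIL: escalate to risk owner"
        print(f"{result.metric}: {result.value:.2f} "
              f"(min {result.threshold:.2f}) -> {status}")
```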

Chapter #3

Internal audit’s role in responsible AI

There are three key areas: AI governance, auditing AI performance and enhancing enterprise IQ.

As the third line of defense, internal audit has an important role to fulfill in responsible AI, just as it would for any other technology with tremendous upside potential alongside downside risk. Internal audit functions are responding in three key areas:

1. Gaining a seat at the table around AI governance. However, multiple seats at several tables are likely needed — depending on whether the AI governance structure is federated or decentralized, or whether it is still formative and hasn’t coalesced around a central team.

2. Auditing the performance of the AI framework and governance, as well as AI systems and products. This may involve early-stage work in preparation for broader rollouts, or compliance audits against a regulatory framework. It also means auditing use cases themselves: the AI systems or solutions in use may drift into risk over time or become a source of risk through improper ingestion of data (see the drift-monitoring sketch after this list).

3. Raising the enterprise IQ around responsible AI. Internal audit may sponsor governance committees and find other ways to share knowledge. It serves as a custodian of the control environment's design, making recommendations as warranted, harmonizing the taxonomy and language emerging around AI and making sure it is understood in the business. That education comes about as internal audit reviews different functions, processes and activities.
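One illustrative way an audit team might test for the drift mentioned in the second area is the population stability index (PSI), which compares the distribution of a model input in production against a baseline sample. The Python sketch below is a minimal, self-contained example; the synthetic data and the commonly cited 0.2 alert threshold are illustrative assumptions, not a prescribed methodology.

```python
# Illustrative drift check: population stability index (PSI) between a
# baseline sample and current production inputs for one model feature.
import math
import random


def psi(baseline: list[float], current: list[float], bins: int = 10) -> float:
    """PSI = sum((c_i - b_i) * ln(c_i / b_i)) over equal-width buckets."""
    lo, hi = min(baseline), max(baseline)

    def bucket_shares(sample: list[float]) -> list[float]:
        counts = [0] * bins
        for x in sample:
            # Clamp out-of-range values into the first/last bucket.
            idx = int((x - lo) / (hi - lo) * bins) if hi > lo else 0
            counts[max(0, min(idx, bins - 1))] += 1
        # Floor at a small share so empty buckets don't produce log(0).
        return [max(c / len(sample), 1e-4) for c in counts]

    b, c = bucket_shares(baseline), bucket_shares(current)
    return sum((ci - bi) * math.log(ci / bi) for bi, ci in zip(b, c))


if __name__ == "__main__":
    random.seed(1)
    baseline = [random.gauss(0.0, 1.0) for _ in range(5000)]  # training-era data
    current = [random.gauss(0.5, 1.2) for _ in range(5000)]   # shifted inputs
    score = psi(baseline, current)
    # A common rule of thumb treats PSI above ~0.2 as material drift.
    verdict = "investigate drift" if score > 0.2 else "stable"
    print(f"PSI = {score:.3f} -> {verdict}")
```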

Chapter #4

Asking the right questions to assess AI maturity

Internal audit can enhance dialogue by exploring AI strategy, governance, risk management and metrics.

The abstract power of AI, and the extent to which every function in every industry can draw upon AI-powered use cases, makes just getting started a tricky endeavor; there is no one-size-fits-all approach. These questions can help internal audit start or further the dialogue.

[Figure: CAE playbook responsible AI development journey]

Strategy

  • Is your company’s business model prepared for accelerating AI opportunities and risk mitigation?
  • Has your organization incorporated AI into strategic decision-making and business case/benefits analysis?
  • What is your organization’s internal and external AI communication strategy?
  • Does your organization have the right external alliances and partnerships to enable achieving its AI goals?
  • How does your organization define long-term value for AI?

Governance

  • What stage is your responsible AI program currently in? Are you in the early development phase, scaling up or optimizing for efficiency while aligning with emerging stakeholder needs?
  • Does your organization have a formal committee dedicated to AI governance?
  • What is management's role in setting the AI strategy and in managing associated risks?
  • Does your organization have an AI risk policy?
  • How does your organization cascade AI throughout the three lines of defense (3LoD)?
  • Does your company clearly understand its priority AI issues across all stakeholders?

Risk management

  • Has your organization incorporated elevated AI risks into existing frameworks or taxonomies?
  • How does your organization provide program assurance for AI initiatives to ensure they deliver intended outcomes?
  • Has your organization assessed its processes and technology/tools and developed sufficient models to enable management of the AI lifecycle?
  • Does your organization embed risk and controls into the AI lifecycle?

Metrics and targets

  • What is the process to inventory, approve and track the progress of AI use? (A minimal inventory-record sketch follows this list.)
  • Has your organization defined specific metrics or targets to measure and monitor AI impacts?
  • How does your organization evaluate AI performance and create accountability for achieving targets?
  • Has your organization established reporting and communication channels for AI-related initiatives?
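To make the inventory question above concrete, here is a minimal Python sketch of one possible shape for an AI use-case inventory record, with approval status and review tracking. All field names, statuses and the annual review window are hypothetical placeholders, not a prescribed schema.

```python
# Illustrative AI use-case inventory record; all fields are hypothetical.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class ApprovalStatus(Enum):
    PROPOSED = "proposed"
    APPROVED = "approved"
    IN_PRODUCTION = "in production"
    RETIRED = "retired"


@dataclass
class AIUseCase:
    name: str
    business_owner: str
    risk_tier: str                       # e.g., "high", "medium", "low"
    status: ApprovalStatus = ApprovalStatus.PROPOSED
    last_reviewed: date | None = None
    open_findings: list[str] = field(default_factory=list)


def overdue_for_review(uc: AIUseCase, max_age_days: int = 365) -> bool:
    """Flag records the governance committee has not reviewed recently."""
    if uc.last_reviewed is None:
        return True
    return (date.today() - uc.last_reviewed).days > max_age_days


if __name__ == "__main__":
    inventory = [
        AIUseCase("Invoice triage bot", "Finance", "medium",
                  ApprovalStatus.IN_PRODUCTION, date(2024, 3, 1)),
        AIUseCase("Resume screening assistant", "HR", "high"),
    ]
    for uc in inventory:
        flag = " [REVIEW OVERDUE]" if overdue_for_review(uc) else ""
        print(f"{uc.name}: {uc.status.value}, tier={uc.risk_tier}{flag}")
```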

Chapter #5

Understanding AI entry points and controls for internal audit

Companies must address in-house, vendor and acquisition risks.


Internal audit must understand the vectors through which AI enters an organization and the controls suited to each. For solutions built in-house, a common responsible AI lifecycle runs from identifying and prioritizing use cases, through building and testing them, to completing the monitoring and controls that validate they are working and have not been compromised. However, many companies are also:

  • Buying solutions outright, in which case the lifecycle looks broadly similar.
  • Encountering AI through third-party vendors, for instance via software or tools that a vendor uses in the normal course of service delivery and that have added AI capabilities. Third-party risk questionnaires are crucial here.
  • Making acquisitions, which calls for added due diligence on acquired AI portfolios.

An integrated responsible AI risk management and control environment consists of legacy control activities that need to be revisited and reassessed for readiness in functions like cyber, data privacy, third-party risk management, legal and compliance, as well as net-new control activities housed in those same functions, alongside model risk management controls that span the AI development and procurement lifecycles.

Chapter #6

Next steps: assessing AI readiness

CAEs should assess AI readiness, enhance team skills, and adopt effective audit strategies for AI.

Naturally, all organizations have varying starting points and established processes that they may be able to build on. CAEs should continuously ask where their company is on the responsible AI journey: Is it starting a program, scaling its capabilities or optimizing them?

To stay ahead of whatever comes next, whether a technology to implement within internal audit or one to monitor in the business, CAEs should be aware of the steps organizations take as they build or improve their responsible AI governance and risk management operating model and capabilities. It is up to CAEs to see the full process through from planning to reassessing, incorporating their knowledge of the organization and the players involved. With a greater understanding of risk and mitigation, coupled with supercharged technical capabilities, CAEs and other executives gain the confidence to stride into the future.

  • Yiming Chang and Vikas Bajwa, both senior managers in the Risk Consulting practice of Ernst & Young LLP, made key contributions to this report.

Summary 

Internal audit must navigate the complexities of AI by enhancing governance and risk management. Chief audit executives should create proactive audit plans, educate teams on AI risks and collaborate with leadership to promote responsible AI use while fostering innovation and maintaining compliance with evolving regulations.
