
How integrity-first AI use today builds confidence for tomorrow

The EY Global Integrity Report 2024 highlights the importance of an AI use plan that both harnesses its potential and mitigates the risks.

This article is part of the EY Global Integrity Report 2024.


In brief

  • Fifty-four percent of organizations say they are using or plan to adopt AI in the next two years. Of those, only 40% have measures in place to manage its use.
  • AI can help leaders develop new insights and empower better decision-making. But they need to balance this against AI’s potentially costly risks.
  • Organizations need an integrity-first approach for using AI that addresses the evolving regulatory landscape and the risks of “AI wishing” or “AI washing.”

Artificial intelligence (AI) has the power to fundamentally transform almost every aspect of how organizations operate, and many organizations have already seen significant successes applying AI to daily business activities. Yet, for all its potential, the risks associated with AI use are rapidly evolving. We’ve seen instances where AI has been used to adversely influence business processes, impersonate individuals and entities, and lead to biased decision-making.

For organizations to fully harness the potential of AI and manage the associated risks, they will need to incorporate an integrity agenda into their AI use strategy. It should support positive behaviors around effective governance, robust compliance controls, insights from qualitative and quantitative data, and an integrity-first corporate culture.

According to the 2024 Edelman Trust Barometer, people trust businesses more than nongovernmental organizations (NGOs) or government (59%, 54% and 50% respectively) to make sure innovations are safe, understood, beneficial and accessible. Even so, a 59% confidence level in businesses leaves considerable room for improvement. Every entity, private and public, needs to do more to build confidence in the ethical use of AI.

As AI adoption accelerates, organizations grapple with how to use it

Organizations continue to adopt AI at a rapid pace. The EY Global Integrity Report 2024 findings suggest organizations are grappling with AI ideation, development and deployment to transform their business. Across the organization, slightly more than a quarter (29%) say they’re using AI-enabled tools in their business and operations. Another quarter (25%) say they plan to do so in the next two years.

The use of AI in organizations
54% of global respondents say that they’re either using AI-enabled tools or plan to do so in the next two years.

Within businesses, IT is the earliest adopter, with 42% using AI-enabled tools. Compliance (31%) and finance functions (33%) are also taking bold steps. Internal audit (23%) and legal (14%), meanwhile, lag in active use of AI, but many have plans to catch up in the next two years.

Emerging markets are ahead in managing and safeguarding the use of AI

Whether organizations are in the planning stages or already actively using AI, roughly four in 10 have put measures in place to manage its deployment and use. Interestingly, emerging markets appear more mature in their understanding of, and responsibilities toward, AI. Further, 51% of executives in emerging markets say they’ve received training or guidance from their organization about the permitted uses or risks of AI, vs. 35% of executives in developed markets. Rates in the Middle East, India and North Africa (60%), Far East Asia (59%) and South America (54%) are significantly higher than in Western Europe (35%), North America (32%) and Oceania (28%).

This makes sense: much of the relevant talent base and technical skill set sits in emerging markets, providing added flexibility and speed of adoption. Further, emerging markets have a robust startup mentality and are adept at capitalizing on emerging technology concepts. Conversely, larger organizations in mature markets often have to contend with structures that can slow adoption.

Emerging markets are leading the way in providing guidance for the use of AI
51% of respondents in emerging markets say they’ve received training or guidance from their organization about the permitted uses or risks of AI, vs. 35% of executives in developed markets.

Organizations need a comprehensive plan for the ethical use of AI and the capabilities to enact it

Given the growing expectation among regulators to move from manual corporate reporting, such as spreadsheets and email-based processes, to dynamic, real-time or near real-time monitoring and reporting,1 organizations will have to adopt AI tools faster than anticipated. The volume of data being generated, combined with the need for real-time information to drive business strategy and increasingly complex regulatory requirements, means that AI-enabled tools are fast becoming something organizations need to have now rather than something that would be nice to have in the future. At the same time, with regulatory enforcement and litigation related to AI on the rise, they’ll need to execute a plan to manage the risks associated with using AI.

Organizations need to build comprehensive governance frameworks for the ethical use of AI — according to the EY Global Integrity Report 2024, this is something they admit is challenging, given the pace at which AI is evolving. They will need to work quickly to develop policies, procedures and supportive technology, as well as the proper skills among their people, to embed integrity into their AI agenda.


Chapter 1

Legal and compliance functions see AI’s risks and potential

Legal and compliance respondents are excited about what AI can do for them but struggle to assure the integrity of AI adoption.

Legal and compliance executives see innumerable opportunities for use cases and are excited about the potential AI can bring to their functions. However, the overall low adoption of AI within legal and internal audit suggests that the organization’s second and third lines of defense are not keeping pace with the use of AI in the rest of the organization. We observed the same situation with the rise of big data and robotic process automation (RPA) in prior years, when legal, compliance and internal audit functions had to catch up with the organization’s use of data analytics.

Where legal and compliance functions see potential for using AI

Legal and compliance respondents in the EY Global Integrity Report 2024 cite continuous improvement, ongoing monitoring and risk assessments as the top routine compliance activities best suited to the use of AI. Further, they say that AI’s greatest impact in compliance centers on advanced data gathering, manipulation and risk analysis in correlating data sets (40%), active monitoring and alerting (37%), and risk-scoring activities (34%).

AI’s impact on the compliance function
40% of legal and compliance respondents say that AI’s greatest impact in compliance is around advanced data gathering, manipulation and risk analysis in correlating data sets.

EY teams have seen many successful uses of AI within the compliance and legal functions through their experience in supporting clients around the globe. For example, generative AI (GenAI) tools can quickly research and summarize large masses of information, draft contracts, conduct first-pass reviews and perform certain electronic discovery procedures, greatly increasing accuracy and efficiency in executing routine tasks. AI can also help compliance leaders develop new insights, empowering better decision-making. Specific use cases for AI within compliance and legal functions include:

  • Monitoring regulatory changes and analyzing internal data to identify potential compliance gaps
  • Streamlining the due diligence process by automating third-party background checks and financial analyses to detect red flags
  • Improving risk assessment by analyzing financial transactions, communications and other data to detect patterns and anomalies
  • Generating real-time alerts of red flag activity and triaging instances of potential misconduct
  • Greatly reducing the cost and time to mine large data sets by using predictive models to perform email and document reviews in response to regulatory inquiries, subpoenas and litigation
  • Automatically identifying and extracting or redacting private and privileged information across whole data sets
  • Providing on-demand answers to employee compliance inquiries, referencing corporate policies and giving “how to” instructions through AI chatbots
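To make the risk assessment and alerting use cases above more concrete, here is a minimal sketch of how a compliance team might flag transactions that deviate sharply from a vendor’s historical pattern. It is illustrative only: the data, column names and two-standard-deviation threshold are hypothetical assumptions, and a production tool would use richer features, governed data feeds and a validated model.

```python
import pandas as pd

# Hypothetical transaction history; in practice this would come from the ERP or payments system.
transactions = pd.DataFrame({
    "vendor": ["Acme"] * 9 + ["Globex"] * 5,
    "amount": [1000, 1020, 980, 995, 1010, 1005, 990, 1015, 9500,
               400, 420, 410, 415, 405],
})

# Score each transaction against its vendor's history with a simple z-score.
stats = transactions.groupby("vendor")["amount"].agg(["mean", "std"])
scored = transactions.join(stats, on="vendor")
scored["z_score"] = (scored["amount"] - scored["mean"]) / scored["std"]

# Flag transactions far outside the vendor's norm as candidates for an alert and human review.
alerts = scored[scored["z_score"].abs() > 2]
print(alerts[["vendor", "amount", "z_score"]])
```

Even a sketch like this illustrates why data quality and governance matter: the usefulness of the alert depends entirely on the completeness of the transaction history and on a defensible, documented threshold.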

Despite the exciting potential of AI, legal and compliance executives are wary of key risks that may be holding them back from fully deploying AI within their functions. The top two challenges they cite are inconsistent or missing data to feed into AI models and a lack of in-house expertise.

Challenges in safeguarding the integrity of AI adoption across the organization can lead to “AI wishing” or “AI washing”

With AI rising to the board level, and with the potential for it to become as visible an issue as, for example, cybersecurity, responsibility for the integrity of and confidence in an organization’s use of AI falls largely on the shoulders of legal and compliance, with the support and collaboration of cross-functional teams. They are responsible not only for managing their own AI integrity risks but also for testing compliance and safeguarding the quality of the organization’s integrity standards as AI is implemented across the enterprise. These activities should be embedded into a broader, enforceable operational process within the AI governance framework.

Yet, if legal and compliance teams are struggling to address the integrity risks of using AI within their own functions, they will be hard pressed to ensure that AI-enabled tools across the organization are used in line with the organization’s internal AI governance framework and adhere to jurisdictional regulations or legal requirements.

These challenges can lead to AI wishing — where management overstates its use of AI because it hopes or believes, but cannot verify, that the organization is using AI in the way described. There is also the more sinister risk of AI washing, where the organization deliberately misrepresents how it’s using AI. Whether inadvertent or intentional, AI wishing and AI washing can lead to litigation, and a raft of new claims is already being made. For example, in the US, the Securities and Exchange Commission (SEC) recently charged two investment advisors for making false and misleading statements about their reported use of AI. Both firms agreed to settle, paying US$400,000 in total civil penalties.

While it’s tempting to lean into and ride the wave of the hype around AI’s potential, a strong culture of integrity ensures that breaches of trust relating to the organization’s use of AI are the exception rather than the norm. This is increasingly important, given the rise in enforcement action against such breaches.

The accelerating pace of AI evolution is pushing AI regulation to the top of the agenda for policymakers

In the EU, some member states are looking to increase the use of facial recognition among their police forces. However, the European Parliament recently adopted tighter restrictions as part of the Artificial Intelligence Act.2 This act, which came into force on 1 August 2024, is the most developed AI regulation globally and will have extraterritorial effect and steep fines, making it relevant for all organizations doing business in or with European countries.

China, which was one of the first countries to implement AI regulations, is currently expanding its various regulations and policies applicable to specific AI uses. China has also adopted the United Nations Educational, Scientific and Cultural Organization’s (UNESCO) recommendations on the ethics of AI and is a party to the Organisation for Economic Co-operation and Development’s (OECD) AI Principles.3

In India, the government is asking technology companies to get explicit permission before publicly launching AI tools and has warned companies against using AI products that could generate responses that “threaten the integrity of the electoral process.” This represents a walk-back of its stated position in 2023 of taking a hands-off approach to AI.4

The US, meanwhile, is not likely to pass new federal legislation on AI in the near future, but regulators such as the Federal Trade Commission (FTC) have responded to public concerns about the impact of GenAI by opening expansive investigations into some AI platforms.5 There is also much US state-level and locally specific legislation in force or under consideration.


Because AI is evolving rapidly in ways that are not always predictable, regulators are struggling to keep pace. This puts the onus on companies to adopt stringent checks and balances that go beyond compliance, and an integrity culture can help organizations get there. Specifically, companies can go beyond compliance, and many are doing so, by taking actions such as creating responsible AI taskforces that include a cross-section of employees and look at issues such as the ethical use of AI. Organizations are also creating policies, with related processes and controls, that address these issues.

This can be challenging for organizations, given that skill sets need to evolve alongside these new technologies. As organizations drive a more AI-focused agenda, these skill sets are shifting from traditionally back-office technology functions toward a more prominent role within business functions and across the wider enterprise. Organizations need to be able to bridge the gap between the technology and the business use case. For example, an AI technology professional might work alongside the legal function to explain, in digestible terms, the technical benefits and the key risks of specific use cases or processes.


Chapter 2

Six ways leaders can take an integrity-first approach to AI use

Organizations must have a comprehensive plan and a systematic approach for the ethical and compliant use of AI.

Given AI’s significant potential to fundamentally transform the business landscape, organizations must have a comprehensive plan and implement a systematic approach for the ethical and compliant use of AI. Here are six ways organizations can take an integrity-first approach to using AI:

1. Assess the AI use strategy

Whether the organization has already implemented AI or plans to do so in the near term, it’s important to understand its current maturity in managing the use of AI. An AI maturity assessment can help to identify critical gaps. For example, when a global pharmaceutical company conducted an AI use compliance assessment, it learned that one of its largest gaps was the absence of a consistent AI governance framework.

2. Develop a formal AI use policy and the means to implement it

Governance is the anchor that enables the secure, sustainable, responsible and transparent use of AI. While AI governance frameworks can be useful, they are often voluntary or inconsistently applied. A more constructive approach is to develop a formal, enforceable AI use policy, accompanied by the appropriate means to implement and monitor it. The policy should give specific attention to defining ethical AI principles for the organization; establishing guidelines to respect people’s rights, safety and privacy; ensuring the fairness, accuracy and reliability of AI output; and protecting the security of underlying data and models.

3. Assemble a cross-functional team

For an AI use policy to be most effective, multiple stakeholders across the organization (IT, privacy and information security, compliance, legal, innovation, finance and internal audit functions) need to work together to assess AI use cases, associated risks and appropriate guardrails. Organizations should establish a governance committee to ensure that the various aspects of AI risk are owned by the relevant teams and that the implications for different use cases are understood. Outside the governance committee, the cross-functional team can monitor the consistent application of the governance and risk management approach across the organization. Each team plays a different role in the AI lifecycle and use management, and it is only by working together that the relevant AI risks can be managed effectively from end to end.

4. Build a regulatory and litigation response plan for AI

A regulatory and litigation response plan for AI is the next stage of governance planning. With legal and regulatory environments becoming more challenging, especially with respect to AI, organizations should be prepared with a response plan to manage these crisis events, particularly an AI-wishing or AI-washing claim against the organization. Should an issue arise, the organization’s use of AI will be heavily scrutinized. Organizations need to know who needs to be involved, where the data lives and who is responsible for it. They’ll have to go through a full response program to collect the relevant artifacts and demonstrate from a technical perspective how the organization is using AI. It’s an expensive process that involves hiring lawyers, reviewing models and records, and presenting all these records to the regulator. It’s important to recognize that this isn’t a traditional subpoena request: in a traditional subpoena, organizations may need to produce emails; in AI litigation, they need to be able to produce algorithms.

5. Optimize data governance and processes

In the EY Global Integrity Report 2024, executives cited inconsistent or incomplete data feeds into AI models as their number one challenge in deploying AI within the compliance function. For legal and compliance professionals, and arguably the workforce at large, to trust the data, organizations need a clear and complete understanding of their data. This should include data mapping and lineage, so they know where the data comes from, its level of quality and its limitations.
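As a starting point, a lightweight pre-flight check can profile each data feed for completeness and freshness before it is passed to an AI model. The sketch below assumes hypothetical field names and thresholds; real feeds would be profiled against the organization’s own data quality standards.

```python
import pandas as pd

# Hypothetical data feed; field names and thresholds are illustrative assumptions.
now = pd.Timestamp.now(tz="UTC")
feed = pd.DataFrame({
    "record_id": [1, 2, 3, 4],
    "counterparty": ["Acme", None, "Globex", "Initech"],
    "amount": [1200.0, 950.0, None, 300.0],
    "last_updated": [now - pd.Timedelta(days=d) for d in (1, 2, 90, 0)],
})

def profile_feed(df: pd.DataFrame, max_age_days: int = 30, max_missing_ratio: float = 0.05) -> dict:
    """Summarize completeness and freshness so data owners can judge whether the feed is fit for model use."""
    reference = pd.Timestamp.now(tz="UTC")
    missing_ratio = df.drop(columns="last_updated").isna().mean().to_dict()
    stale_rows = int((reference - df["last_updated"] > pd.Timedelta(days=max_age_days)).sum())
    flagged_columns = [col for col, ratio in missing_ratio.items() if ratio > max_missing_ratio]
    return {
        "rows": len(df),
        "missing_ratio_by_column": missing_ratio,
        "stale_rows": stale_rows,
        "columns_exceeding_missing_threshold": flagged_columns,
    }

print(profile_feed(feed))
```

A check of this kind does not replace data mapping and lineage, but it gives legal and compliance professionals a documented, repeatable signal of whether a feed meets the quality bar before it shapes AI output.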

6. Build an inventory of all AI tools in use

Organizations should have, or build, an inventory of all AI and machine learning (ML) tools in use. As the organization’s AI capabilities mature, it can focus on building a scalable, flexible, secure infrastructure that can safely manage a portfolio of AI algorithms.
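What such an inventory records will vary by organization. The sketch below shows, using hypothetical fields and entries, the kind of minimal record a registry might hold for each tool, such as owner, purpose, model type, data sources and a risk tier, so the governance committee has a single view of what is in use and what still needs assessment.

```python
from __future__ import annotations

from dataclasses import dataclass, field, asdict

@dataclass
class AIToolRecord:
    """One entry in a hypothetical AI/ML tool inventory; the fields are illustrative, not a standard."""
    name: str
    owner: str                       # accountable business or function owner
    purpose: str                     # business use case the tool supports
    model_type: str                  # e.g., "GenAI", "classification", "forecasting"
    data_sources: list[str] = field(default_factory=list)
    risk_tier: str = "unassessed"    # e.g., "low", "medium", "high" per the governance framework
    last_reviewed: str | None = None

inventory = [
    AIToolRecord(
        name="Contract summarizer",
        owner="Legal",
        purpose="First-pass review and summarization of inbound contracts",
        model_type="GenAI",
        data_sources=["contract repository"],
        risk_tier="medium",
        last_reviewed="2024-05-01",
    ),
    AIToolRecord(
        name="Third-party screening",
        owner="Compliance",
        purpose="Automated background checks and red-flag detection on vendors",
        model_type="classification",
        data_sources=["vendor master", "sanctions lists"],
    ),
]

# Export the registry for reporting, and surface tools that have never been reviewed.
registry = [asdict(tool) for tool in inventory]
unreviewed = [tool.name for tool in inventory if tool.last_reviewed is None]
print(unreviewed)
```

However it is implemented, the value of the inventory lies in keeping it current and tied to the governance framework, so that every tool has an owner, a purpose and an assessed risk tier.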

The pace at which AI is advancing is only accelerating. Given all the concepts and components that organizations must consider, not only to implement AI but also to instill confidence in it, they must develop a cohesive, integrity-first approach to AI. Ad hoc efforts to chase risks and challenges after the fact will not suffice.

An integrity-first AI agenda that both harnesses the full potential of AI and mitigates its risks rests on a robust AI use strategy built on a strong governance framework: clearly defined policies and procedures, controls that align with the governance protocols, sound data governance and processes, and a cross-functional team that can not only drive the deployment of AI but also champion a culture of integrity around it.


Summary

The EY Global Integrity Report 2024 highlights that, as the pace of advances in AI accelerates, leaders admit they’re struggling to keep track of where and how AI is being implemented within their organizations. It is therefore critical that leaders establish a cohesive, integrity-first AI use strategy that sits atop a strong governance framework to harness AI’s potential, mitigate the risks and build confidence in the ethical use of AI.
