Case Study

How EY is navigating global AI compliance: The EU AI Act and beyond

Blending compliance with ethical deployment and long-term value, the global EY organization (EY) is turning AI regulation into a strategic advantage.

The better the question

How do you create protocols for something unprecedented?

As global AI regulation evolves, EY teams must meet compliance requirements without stifling innovation.

As artificial intelligence (AI) technology advances, governments around the world are working to harness AI’s potential while mitigating associated risks. However, AI regulation is developing unevenly, with different regions taking distinct approaches and timelines.

This global variation reflects each region’s unique priorities and perspectives on balancing innovation with governance, resulting in a fragmented regulatory landscape that poses significant challenges for multinational organizations like EY.

One of the most comprehensive AI regulations to date is the European Union’s Artificial Intelligence Act (EU AI Act), which entered into force on August 1, 2024. The regulation aims to protect EU citizens while encouraging safe, innovative uses of AI. Unlike the sector-specific regulations seen in regions like China and certain US states, the EU AI Act applies extraterritorially, affecting any organization operating within the EU market. Its phased compliance timeline begins in February 2025, when prohibitions on AI systems posing unacceptable risk take effect, and includes further provisions extending through 2027.

The EU AI Act uses a risk-based framework that categorizes AI systems into tiers according to their potential impact, from minimal and limited risk through high risk to prohibited, unacceptable-risk uses. Key transparency requirements, milestones for compliance, and significant penalties for non-compliance have made it a priority for companies involved in developing, deploying or distributing AI.
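
To make this tiered model concrete, the short Python sketch below shows one way an organization might tag its AI use cases against the Act’s risk levels. The tier labels reflect the Act’s publicly described categories, but the class name and example systems are hypothetical and purely illustrative; this is not a legal classification tool, nor a representation of EY’s tooling.

    from enum import Enum

    # Risk tiers as publicly described under the EU AI Act; labels are illustrative.
    class RiskTier(Enum):
        UNACCEPTABLE = "prohibited"    # banned practices, e.g. social scoring
        HIGH = "high-risk"             # strict obligations, e.g. recruitment tools
        LIMITED = "limited-risk"       # transparency duties, e.g. chatbots
        MINIMAL = "minimal-risk"       # no specific obligations, e.g. spam filters

    # Hypothetical mapping of internal use cases to tiers, used to decide
    # which compliance obligations apply to each system.
    portfolio = {
        "cv-screening-assistant": RiskTier.HIGH,
        "client-facing-chatbot": RiskTier.LIMITED,
        "email-spam-filter": RiskTier.MINIMAL,
    }

    for system, tier in portfolio.items():
        print(f"{system}: {tier.value}")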

EY has positioned itself at the forefront of addressing these regulatory challenges, not only working to comply with the EU AI Act but also preparing for potential new AI regulations worldwide. The global organization of member firms recognized the need to view compliance as more than a box-ticking exercise, instead seizing this moment as a strategic opportunity to embed ethical AI practices that enhance long-term business value.

The better the answer

Embedding a culture of compliance and responsible governance

A unified vision has been central to EY’s AI compliance journey.

In response to evolving requirements, EY teams have implemented comprehensive changes across the organization, underpinned by significant strategic investment in AI governance.

“In building compliance, we’ve invested in cultivating a shared responsibility across the EY global network,” says Yvonne Zhu, Partner, Technology Risk, EY Canada. “AI governance is not just another task – it’s about empowering our people with a long-term commitment to responsible AI. Even with the most robust frameworks and models in place, our success depends on winning the hearts and minds of EY people.”

EY’s journey toward compliance with the EU AI Act required a cultural shift and extensive cross-functional coordination. A US$1.4 billion investment in AI transformation efforts has funded initiatives such as targeted training programs that engage staff beyond compliance requirements and foster a proactive approach to responsible AI.

The challenge of implementing the EU AI Act requirements across EY’s global organization necessitated centralized coordination, led by the risk management team and supported by cross-functional collaboration. Alexei Ivanov, Partner, Global Risk Management, EY LLP, notes, “Navigating the complexities of aligning perspectives across our numerous member firms, service lines, and functions was no small feat. Leadership’s role in driving a unified vision has been instrumental to our success.”

EY’s approach to AI governance adopts a long-term, strategic view that considers both regulatory and commercial imperatives. This approach not only allows EY teams to adapt to regulatory shifts but also helps the organization build resilience within its AI systems. While many AI systems may carry minimal regulatory risk, they still require strong governance from a commercial perspective, underscoring the value of integrating regulatory and business goals.

Collaboration with policymakers and regulators is another cornerstone of the strategy. Beyond adhering to the EU AI Act, EY teams actively engage in discussions with regulators and participate in international forums on AI policy. This engagement not only helps shape regulatory development but also positions the EY organization to adapt quickly and to guide clients on future compliance needs.

The better the world works

Ethical AI principles are the driver of strategic advantage

Proactive AI governance builds trust and resilience, turning compliance into a strategic asset.

EY’s commitment to responsible AI, grounded in principles such as transparency, fairness and accountability, has brought significant benefits to the organization. “AI regulation will continue to evolve,” says Sofia Ihsan, EY Global Responsible AI UK&I Leader, Risk Consulting, “but our adherence to these principles remains constant.”

This responsible AI framework has helped EY build resilience, establish trust, and position itself as a leader in AI governance. Since 2018, the EY organization has leveraged its knowledge from highly regulated sectors to build a foundation of AI governance that brings tangible benefits through key strategies:

  1. Building on model risk management practices: EY’s background in regulated industries has helped establish a solid foundation for AI governance, integrating regulatory compliance with business considerations and enhancing organizational resilience.

  2. Cross-functional collaboration: By engaging teams from technology, risk, legal, data governance and beyond, EY has built a comprehensive AI governance framework that balances regulatory obligations with business goals. This integrated approach has fostered a more agile and responsive organization.

  3. Centralized AI inventory: EY’s cataloging of AI assets, paired with a risk-rating system, helps enable quick categorization and assessment of AI systems; a simple illustrative sketch follows this list. This proactive inventory management has enhanced compliance efficiency and strengthened risk management capabilities.

  4. Commitment to responsible AI principles: Clear ethical principles aligned with EY’s broader values have increased confidence in AI systems organization-wide, supporting innovation and encouraging responsible use of AI across the business.

  5. Organizational alignment on regulatory insights: Drawing on the experience of EY’s public policy team, the organization distills complex regulations into actionable guidance. This alignment helps ensure that every function understands and supports the organization’s AI governance objectives.
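
To illustrate the centralized inventory described in point 3, the sketch below shows how an AI asset catalog with a risk rating might be represented in Python. The record fields, example entry and query are assumptions made for illustration only; they are not EY’s actual schema or tooling.

    from dataclasses import dataclass
    from datetime import date

    # Hypothetical inventory record; field names are illustrative only.
    @dataclass
    class AIAssetRecord:
        asset_id: str
        owner: str            # accountable business owner
        purpose: str          # intended use of the system
        risk_rating: str      # e.g. "high", "limited", "minimal"
        last_reviewed: date

    # A central catalog keyed by asset ID supports quick categorization and review.
    inventory = {
        "fraud-detection-v2": AIAssetRecord(
            asset_id="fraud-detection-v2",
            owner="Risk Management",
            purpose="Flag anomalous transactions for human review",
            risk_rating="high",
            last_reviewed=date(2025, 1, 15),
        ),
    }

    # Example assessment query: surface high-risk assets for periodic review.
    high_risk = [record for record in inventory.values() if record.risk_rating == "high"]
    print([record.asset_id for record in high_risk])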

By embedding ethical principles, fostering cross-functional collaboration, and staying engaged with regulators, EY’s approach to AI governance has positioned the organization not only to meet regulatory demands but also to transform AI into a strategic asset. This proactive stance elevates EY’s role as a trusted advisor, benefiting both EY teams and their clients as they navigate the evolving AI landscape.
