Case Study

How Mott MacDonald is building confidence through responsible AI

EY teams supported a leading consultancy business to transform their responsible AI governance – and prepare them for incoming EU AI legislation.

The better the question

How can your organization have confidence in the opportunities AI brings?

To realize the full potential of artificial intelligence (AI), organizations need to focus on governance and responsibility.

Mott MacDonald is an engineering, development and management consultancy headquartered in the UK, with more than 20,000 people in over 50 countries. It plans, designs, delivers and maintains the transport, energy, water, buildings and wider infrastructure that is integral to people’s daily lives. Mott MacDonald (“the Group”) uses its expertise to overcome complex challenges, for the benefit of the clients and communities they serve, with digital and technical excellence – including AI – at the core of its delivery.

As one of the largest wholly employee-owned firms of its kind, Mott MacDonald is a values-led organization operating in a highly regulated domain where health, safety and wellbeing are paramount. They are committed to the responsible use of AI, making sure its deployment is ethical, transparent and sustainable. By prioritizing ethical AI practices and promoting AI transparency, they strive to build confidence and accountability in their AI systems.

Mott MacDonald recognized the importance of being able to reliably identify and respond to AI risks in order to deliver on their purpose, safeguard their reputation and ultimately help clients harness the power of leading technology in the delivery of the world’s most complex infrastructure projects.

The Group approached EY teams to build on this commitment to responsible AI and to support compliance with emerging AI regulations, including the landmark EU Artificial Intelligence Act.1 The Act is designed to protect EU citizens – ensuring that the AI that impacts them is lawful, ethical and robust. Like the General Data Protection Regulation (GDPR), it is extra-territorial and carries substantial fines – up to 7% of a company's global annual revenue or €35 million, whichever is greater.

This marks the transition of responsible AI from an “ethical opt-in” to mandatory compliance – highlighting Mott MacDonald’s foresight in proactively preparing for AI regulation.

EY teams advocate a balanced approach to meeting the challenge of AI governance. Innovation without effective safeguards can pose reputational, legal and operational risks for organizations. However, over-governing a differentiating technology such as AI can stifle innovation, hamper client delivery and diminish competitiveness in the market.

Mott MacDonald engaged the EY UK Responsible AI team to help navigate these challenges – working collaboratively to safely and responsibly innovate with AI at pace and scale.

Catriona Campbell, EY UK&I Client Technology and Innovation Officer, says: “We believe that responsible AI is not just about compliance, but about creating value. Our collaboration with Mott MacDonald exemplifies this philosophy and underscores our commitment to helping clients navigate the complexities of AI governance while driving impactful outcomes.”

As an alternative to imposing a blanket AI governance policy across all Mott MacDonald business lines, EY teams consulted on an informed, flexible and risk-based approach. They recognized that governance requirements differ for each AI system and client offering, depending on the level of risk it poses. Delivering bespoke protocols, aligned to overarching principles and to the Group’s risk appetite, helps to deliver a responsible and agile approach to AI innovation.

Together, the Mott MacDonald and EY teams embarked on a multi-phase journey to advance their approach to AI governance – positioning the organization as a sector leader in responsible AI.

The better the answer

A solution to safeguard against risks without compromising innovation

EY teams combined technical, ethical and legal competence to develop a pragmatic AI governance framework for Mott MacDonald.

The initial phase focused on advising on the key components of an AI governance program. Leveraging the EY AI Governance Maturity Model and Responsible AI Risk Taxonomy, EY teams helped Mott MacDonald to transform and mature their AI governance approach. This critical step helped build stakeholder consensus across functions – including technology, data, legal, procurement and risk teams – communicating the importance of AI governance and setting out the next steps to achieve best practice.
 

The next phase was the transition from advisory to design. EY teams worked with those responsible for the Group’s global AI governance initiatives to co-design a new AI governance framework and supporting protocols:

  • Defining AI – We established an organization-wide definition of AI and developed guidance documentation tailored to different stakeholder audiences (e.g., technical versus non-technical staff). A clear articulation of what is and what is not considered AI is an essential first step in scoping which technology assets should be covered by AI governance requirements, as definitions can be subject to individual interpretation.
  • Curating an AI risk appetite – We helped to define their AI risk appetite – the level of risk that the Group is willing to tolerate in relation to their AI innovation goals. This work started by identifying how AI risk intersected with other risk areas already recognized and tracked by the organization as part of their broader risk management framework. From there, we guided Mott MacDonald on how best to integrate and enforce their AI risk appetite at scale, for example, by identifying key risk indicators across 10 aspects of Mott MacDonald’s risk framework, including sustainability, talent and cybersecurity, to measure adherence to the risk appetite in practice over time.
  • Drafting a responsible AI policy – EY teams worked with the Group’s Head of Privacy and Data Protection to design a comprehensive responsible AI policy, defining Mott MacDonald’s organizational commitment and outlining the principles and practices around using, developing, procuring and deploying ethical AI. This policy serves as a cornerstone for AI governance, providing clear guidelines on fairness, transparency, accountability and compliance with relevant AI regulations and industry standards.
  • Designing an inventory management process – We designed an inventory management process to empower Mott MacDonald teams to seamlessly track and manage their AI systems and underlying assets, such as AI models – ranging from advanced language models like GPT-4 to simpler techniques such as linear regression – as well as datasets. This process helps to ensure that AI-related assets are properly documented and monitored, facilitating effective oversight and control over AI systems.
  • Designing a bespoke risk assessment methodology – We developed a bespoke risk assessment methodology to classify AI systems based on their inherent risk, focusing on both commercial and regulatory perspectives. This methodology enables the team to systematically evaluate and categorize AI systems, making sure that higher-risk systems receive appropriate scrutiny and governance, while lower-risk AI systems can get to production faster. A simplified, illustrative sketch of how an inventory record might map to a risk tier follows this list.
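
To make the inventory and risk-classification ideas above more concrete, here is a minimal sketch in Python of how an AI system inventory record and a simple tiering rule might fit together. The field names, risk indicators, thresholds and identifiers (`AISystemRecord`, `classify_risk_tier`) are illustrative assumptions for this article only; they are not the actual methodology, taxonomy or tooling designed for Mott MacDonald.

```python
# Illustrative sketch only: fields, indicators and thresholds are hypothetical
# examples, not the EY/Mott MacDonald risk assessment methodology.
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    """Simplified tiers; a real framework maps each tier to governance requirements."""
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class AISystemRecord:
    """One inventory entry for an AI system and its underlying assets."""
    name: str
    owner: str
    models: list = field(default_factory=list)    # e.g. ["GPT-4", "linear regression"]
    datasets: list = field(default_factory=list)  # training and evaluation data sources
    processes_personal_data: bool = False
    used_in_safety_critical_work: bool = False
    client_facing: bool = False


def classify_risk_tier(record: AISystemRecord) -> RiskTier:
    """Count simple inherent-risk indicators and map the total to a tier.

    Higher tiers would attract more scrutiny before deployment; lower tiers
    could move to production faster.
    """
    score = sum([
        record.processes_personal_data,
        record.used_in_safety_critical_work,
        record.client_facing,
    ])
    if score >= 2:
        return RiskTier.HIGH
    if score == 1:
        return RiskTier.MEDIUM
    return RiskTier.LOW


# Example: register a hypothetical system and check which governance route it follows.
assistant = AISystemRecord(
    name="Bid-writing assistant",
    owner="Digital delivery team",
    models=["GPT-4"],
    datasets=["Internal bid library"],
    client_facing=True,
)
print(classify_risk_tier(assistant))  # RiskTier.MEDIUM
```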

Bringing teams together to shape AI governance excellence

Multi-disciplinarity is essential to establishing a robust approach to AI governance, given the cross-cutting nature of AI risks in areas ranging from cybersecurity to sustainability and beyond. Sofia Ihsan, EY Global Responsible AI UK&I Leader, reflects: “Our team's position as an independent advisor played a contributing role in supporting Mott MacDonald, because we brought our extensive experience and objectivity to the table.”

The EY delivery team consisted of data scientists, digital ethicists, lawyers and IT risk practitioners. Naomi Bishop-Bunn, Mott MacDonald Group AI Governance Lead, noted, “This diversity of experience and perspectives – bridging technical, ethical and commercial domains – is what made the difference.”

The better the world works

Shaping the future of AI – safely, responsibly and with confidence

Mott MacDonald has embraced a renewed commitment to responsible AI, aligning innovation with a focus on safety and engineering excellence.

The design and roll-out of a robust AI governance framework has positioned Mott MacDonald as an early adopter in responsible AI within their sector, enabling them to deliver tangible benefits to their clients, people and the communities they serve.

“The proactive engagement from the team at EY made a significant impact on our collaboration. Their willingness to connect beyond scheduled meetings fostered a productive relationship. This support was crucial in achieving the timely and high-quality completion of the policy documentation," says James Alexander, Mott MacDonald’s Group Head of Privacy and Data Protection.

The impact of this includes:

  1. Enhanced governance: The risk assessment methodology developed during this engagement has improved Mott MacDonald's understanding – and ranking – of AI risk. This has empowered their teams to prioritize governance accordingly, optimizing time and resources toward the AI systems that pose the most risk to their organization.
  2. Readiness: Mott MacDonald is prepared for the changes that the EU AI Act and similar legislation in other jurisdictions will bring. They are proactively building AI governance maturity in step with regulatory change and leading practices, which will allow them to innovate more quickly while staying aligned with future regulations.
  3. Building client confidence: Mott MacDonald’s commitment to the principles of responsible AI they have adopted is fundamental to developing confidence, building trust and driving transformational change for their clients.
  4. An empowered workforce: This project has been a catalyst for change across all the regions in which Mott MacDonald operates, helping both internal teams and clients realize the benefits of AI without unnecessary obstacles.
  5. Strengthened market position: As one of the early adopters in their sector to publish a responsible AI policy,2 Mott MacDonald is not only adhering to regulation but also contributing to AI transparency and accountability. This is in line with their reputation for being innovative and forward-thinking, and promotes the adoption of similar practices across the industry, encouraging higher ethical standards.

“Mott MacDonald's commitment to responsible AI goes beyond compliance. By publishing their AI policy, they are ensuring adherence to regulations and keeping abreast of advancements in the field of AI,” explains Piers Clinton-Tarestad, UK&I Partner, Technology Risk, Ernst & Young LLP.

This project showcases how a commitment to responsible AI, underpinned by robust policies and frameworks, is key to harnessing the transformative power of AI while supporting compliance with emerging regulations. Mott MacDonald's journey is an indicator that the future of competitive business lies in balancing technological innovation with integrity.
