Mott MacDonald is an engineering, development and management consultancy headquartered in the UK, with more than 20,000 people in over 50 countries. It plans, designs, delivers and maintains the transport, energy, water, buildings and wider infrastructure that is integral to people’s daily lives. Mott MacDonald (“the Group”) uses its expertise to overcome complex challenges for the benefit of the clients and communities it serves, with digital and technical excellence – including AI – at the core of its delivery.
As one of the largest wholly employee-owned firms of its kind, Mott MacDonald is a values-led organization operating in a highly regulated domain where health, safety and wellbeing are paramount. The Group is committed to the responsible use of AI, making sure its deployment is ethical, transparent and sustainable. By prioritizing ethical AI practices and promoting AI transparency, it strives to build confidence and accountability in its AI systems.
Mott MacDonald recognized that being able to reliably identify and respond to AI risks is essential to delivering on its purpose, safeguarding its reputation and, ultimately, helping clients harness the power of leading technology in the delivery of the world’s most complex infrastructure projects.
The Group approached EY teams to build on this commitment to responsible AI and to support compliance with emerging AI regulations, including the landmark EU Artificial Intelligence Act.1 The Act is designed to protect EU citizens – ensuring that the AI that impacts them is lawful, ethical and robust. Like the General Data Protection Regulation (GDPR), it is extra-territorial and carries substantial fines – up to 7% of a company’s global annual revenue or €35 million, whichever is greater.
The Act marks the transition of responsible AI from an “ethical opt-in” to a mandatory compliance obligation – highlighting Mott MacDonald’s foresight in proactively preparing for AI regulation.
EY teams advocate a balanced approach to meeting the challenge of AI governance. Innovation without effective safeguards can expose organizations to reputational, legal and operational risks. However, over-governing a differentiating technology such as AI can stifle innovation, hamper client delivery and diminish competitiveness in the market.
Mott MacDonald engaged the EY UK Responsible AI team to help navigate these challenges – working collaboratively to safely and responsibly innovate with AI at pace and scale.
Catriona Campbell, EY UK&I Client Technology and Innovation Officer, says: “We believe that responsible AI is not just about compliance, but about creating value. Our collaboration with Mott MacDonald exemplifies this philosophy and underscores our commitment to helping clients navigate the complexities of AI governance while driving impactful outcomes.”
As an alternative to imposing a blanket AI governance policy across all Mott MacDonald business lines, EY teams advised on an informed, flexible and risk-based approach, recognizing that governance requirements differ for each AI system and client offer depending on the level of risk it poses. Bespoke protocols, aligned to overarching principles and to the Group’s risk appetite, support a responsible and agile approach to AI innovation.