
How do you plan to create trust by design in artificial intelligence?

Organizations should consider fundamental attributes and ongoing assurance to meet their shared responsibility for trusted AI.


In brief

  • Companies have a shared responsibility to ensure that AI meets technical, ethical and social criteria from development to operation.
  • Customers need to be able to trust the AI used by businesses, while regulators are making trust part of the law.
  • The EY Trusted AI Framework proposes seven attributes to address the unique risks of AI and build trust.

It’s easy to become weary of headlines that tell us the latest tech breakthrough will “change everything.” But generative AI and AI-driven large language models (LLMs) are set to live up to the hype, creating a new form of intelligence that may even surpass the creation of the PC in terms of impact. But how can we ensure that this new form of intelligence can be trusted? 

Figure: Top barriers — organizations cite unclear AI governance and ethical frameworks.

As AI accelerates, its ability to transform performance and productivity could translate into huge value in varied sectors, from banking and healthcare to consumer goods. However, for many businesses, the buzz around AI is yet to yield genuine breakthroughs. While organizations have adopted AI in piecemeal form or launched pilot projects, these important first steps are in reality a response to uncertainty. Now is the time to move from siloed projects to a cohesive and comprehensive strategic roadmap for transformation.

The challenge will be to define the organization’s AI strategy and governance, and to implement frameworks that absorb and integrate this transformative change in as controlled and secure a way as possible. A cornerstone of this journey will be maintaining the organization’s level of digital trust; getting this wrong could result in a loss of customers, market share and brand value. Conversely, those that get it right will be able to differentiate themselves from their competitors in the digital economy as they look to disrupt their business and enter new markets. But how can an organization transform itself to such a large extent while maintaining digital trust?
 

Trusted AI framework

With the risks and impact of AI spanning technical, ethical and social domains, a new framework for identifying, measuring and responding to the risks of AI is needed to build and maintain digital trust. The EY Trusted AI Framework with seven attributes is built on the solid foundation of existing governance and control structures, but also introduces new mechanisms to address the unique risks of AI.


As AI solutions will also be heavily sourced from external third parties, shared responsibilities and their implications for the Responsible AI framework must be considered by design, right from the beginning. Using an external AI provider will require the organization to adequately identify the related risks and to change its approach to how AI is governed and digital trust maintained.

Assurance essential

With great opportunity comes great change, great risk and great responsibility. Maintaining digital trust throughout development, implementation and operation is essential for the success and speed of your adoption. We believe there are three fundamental assurance actions that leaders need to incorporate now:

If AI delivers on its potential, it could be every bit as transformative as the personal computer has been over the last five decades, supercharging productivity, unleashing innovation and spawning new business models — while disrupting those that don’t adapt quickly enough. The uncertainty and resource constraints confronting many companies are real, but there’s no need to let them become an excuse for inaction and delay. 

Summary

Navigating trust in AI involves addressing the risks associated with its rapid growth. Regulatory developments like the EU AI Act play an important role in building trust, but organizations must also take a proactive approach and acknowledge AI as a shared responsibility.

Acknowledgements

We kindly thank Emre Beyazgül and Gian Luca Kaiser for their valuable contribution to this article.
