What is intelligence without trust?

We explore five key areas that can help business leaders build trust, even as AI continues to transform businesses.

Artificial intelligence will eventually transform many enterprises and industries, but its pace of development has been slowed by a lack of trust. Today, without mature risk awareness and the right frameworks and controls, most applications of AI have not evolved beyond proofs of concept and isolated solutions.

Many companies are using AI in low-risk areas, often only for insights, and where the technology is replacing human decision-making, it’s doing so under human oversight. This is appropriate while autonomous AI decision-making is in its infancy, but use cases for AI are accelerating rapidly. Over time, AI will be responsible for more decisions, and for decisions with larger impacts.

To understand the importance of trust in building successful AI systems, business leaders can start by exploring these five key questions.

Question 1: Why is AI different from other technologies in terms of trust?

Unlike other technologies, AI adapts on its own, learning through use, so the decisions it makes today may differ from those it makes tomorrow. Those changes must be continuously monitored to validate that its decisions remain appropriate and high quality, and that they reflect corporate values.

For instance, risk can be introduced when AI systems are trained using historical data. Consider how that applies to hiring decisions. Does the historical data account for biases that women and minorities have faced? Do the algorithms reproduce past mistakes even though governance processes were implemented to prevent them? Does the system prevent unfairness and comply with laws?
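One concrete way to surface this kind of risk is a selection-rate comparison across groups. Below is a minimal Python sketch of a “four-fifths rule” style check; the data, column names and threshold are hypothetical, and a real adverse-impact analysis is a legal and statistical exercise well beyond this illustration.

```python
import pandas as pd

# Hypothetical historical hiring outcomes -- in practice, the data used
# to train or audit the hiring model.
hires = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "hired": [1, 1, 0, 1, 0, 0, 1, 0],
})

# Selection rate per group, and the ratio of the worst rate to the best.
rates = hires.groupby("group")["hired"].mean()
impact_ratio = rates.min() / rates.max()

# Common rule of thumb: a ratio below 0.8 (the "four-fifths rule")
# flags possible adverse impact and warrants investigation.
if impact_ratio < 0.8:
    print(f"Selection-rate ratio {impact_ratio:.2f}: review for bias")
```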

"AI can be a great tool to augment humans, but we must understand its limitations,” says Nigel Duffy, former EY Global AI Leader. “The best answer from AI may still not be appropriate based on cultural and corporate values.”

AI’s decisions must align with corporate values, as well as broader ethical and social norms. Yet humans’ ethical standards are shaped by many things: our families, our cultures, our religions, our communities. And development teams are often composed largely of white or Asian men rather than reflecting our diverse world. Do their personal values reflect the specific corporate values we want applied in these situations?

But we also need to ask ourselves whether these systems are doing what we expect them to do. AI use is spreading, yet few organizations have mature capabilities to monitor its performance. Companies have gone bankrupt because of poorly managed automated decision systems; a runaway system can torpedo a firm in a day.

Question 2: What are the risks of a failed AI system?

The risks are legal, financial and reputational. A failed AI system can breach hiring laws and other regulations. In one case, a trading firm lost US$440 million in just 45 minutes because of a software glitch. In another, a major photo-sharing service’s faulty algorithm applied racially offensive animal labels to photos of people.

Algorithms are fallible and can be deliberately fooled. While studying image-recognition algorithms, one research team discovered that changing just a few pixels in a photo could make a system classify a toy turtle as a gun.
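To make the mechanism concrete, here is a minimal sketch of a fast-gradient-sign style perturbation against a toy linear classifier. Everything here is hypothetical (the model, the image, the epsilon budget); published attacks target deep networks, but the principle is the same: many tiny, targeted pixel changes add up to a flipped decision.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: a "trained" linear scorer and one flattened image.
w = rng.normal(size=784)        # hypothetical model weights
x = rng.normal(size=784)        # hypothetical 28x28 photo, flattened

score_before = w @ x            # positive score => class "turtle"

# For a linear model, the gradient of the score with respect to the
# input is w, so nudging every pixel slightly against that gradient
# moves the score as far as possible for a given per-pixel budget.
eps = 0.1                       # small, barely visible per-pixel change
x_adv = x - eps * np.sign(w)    # shifts the score by eps * sum(|w|)

score_after = w @ x_adv         # often enough to flip the decision
print(f"before: {score_before:+.1f}  after: {score_after:+.1f}")
```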

These aren’t arguments against using AI. But they are cautionary reminders of the importance of making sure AI does what was originally intended, backed by rigorous controls and processes.

Question 3: How can business leaders mitigate those risks?

As with any new technology, determine how to manage AI by drawing on existing governance and technology management practices, then identify what must be supplemented, modified or augmented. For example, AI may call for more real-time monitoring to gauge how it is evolving and whether it is still operating within expected boundaries.
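As a sketch of what such monitoring could look like, the snippet below computes a population stability index (PSI), a common drift score, between a model’s output distribution at deployment and a recent window. The data and the 0.25 alert threshold are illustrative assumptions; a production setup would feed in real model outputs and tune thresholds per use case.

```python
import numpy as np

def population_stability_index(baseline, live, bins=10):
    """Drift score between a baseline and a live score distribution.
    Rule of thumb: <0.1 stable, 0.1-0.25 watch, >0.25 significant drift."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    live = np.clip(live, edges[0], edges[-1])   # keep live scores in range
    b_pct = np.histogram(baseline, bins=edges)[0] / len(baseline) + 1e-6
    l_pct = np.histogram(live, bins=edges)[0] / len(live) + 1e-6
    return float(np.sum((l_pct - b_pct) * np.log(l_pct / b_pct)))

# Hypothetical model scores: at deployment vs. the most recent week.
rng = np.random.default_rng(1)
baseline = rng.beta(2, 5, size=10_000)
live = rng.beta(3, 4, size=10_000)   # the live distribution has shifted

psi = population_stability_index(baseline, live)
if psi > 0.25:
    print(f"PSI = {psi:.2f}: model behavior has drifted; trigger a review")
```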

Also, it’s important to consider AI from a full-systems view rather than focusing on individual components. One AI algorithm rarely operates by itself; it may work alongside robotics, Internet of Things sensors and other algorithms, and additional risks can arise from these systems interacting with one another.

Similarly, enterprises shouldn’t adopt third-party AI applications without fully understanding their risk profiles and limitations.

“Enterprises must take the first step to manage risk by asking critical questions, such as, ‘Where is AI deployed in my enterprise, and what controls are in place today?’” says Cathy Cobey, EY Global Trusted AI Consulting Leader.

Question 4: Why is trust in AI so important?

If we’re going to rely on AI to make decisions and drive our cars, it requires trust. Without it, the technology won’t be adopted, or it will require so much human oversight that it will negate the efficiencies and other benefits.

Creating a framework for using AI and managing the risk may sound complicated, but it is similar to the controls, policies and processes already used for humans. We’re already evaluating human behavior against a set of norms, and as soon as people start to operate outside those norms — such as by letting bias cloud their judgment — then we react.

Companies should also understand the spectrum of risk and match control and governance procedures to each risk. The risks of an AI technology depend on how it is used. For example, imaging software that tags personal photos has a much lower risk profile than imaging used to detect a pedestrian crossing the road.

Understanding the risk profile of the AI technology and its use case helps you determine the appropriate governance and control framework to apply to it.

“Like any world-changing technology, AI comes with risks,” says Nigel. “But there are well-defined ways to manage the potential downside while capitalizing on the tremendous upside.”

Question 5: When integrating AI, how can businesses sustain trust?

Companies need to embed trust from the very beginning, as part of the system requirements, not as an afterthought to worry about down the road. Risk, compliance and governance functions that offer real, effective challenge and oversight of AI will provide the foundation for truly transformational use cases.

Key actions include:

  1. Determine how to decide which use cases are acceptable for AI, including the use of an ethics board composed of professionals from a diverse set of disciplines
  2. Conduct an inventory of where your enterprise is using AI, and build a risk profile of each use to support good governance
  3. Embed trust into the design from the very beginning as part of the AI system’s requirements
  4. Use the tools and techniques necessary for continuous monitoring
  5. Bring in subject-matter professionals to provide services such as independent testing and validation of the AI algorithms

“Don’t overlook AI’s potential as a risk management solution, too,” says Cathy. “For instance, it can be used to mitigate risks around cybersecurity and privacy, as well as human bias.”

Companies ready to pursue or refine an AI strategy should keep these areas of focus in mind:

  • How well is AI performing and aligning with expectations?
  • How are biases identified and addressed?
  • To what degree is there transparency for end users?
  • How resilient is your AI strategy with regard to corruption and security?
  • How easily can the AI system’s methods and decisions be understood, documented and validated?

"Trusted AI encompasses not only ethics and social responsibility but performance — trusting that it is doing what it needs to,” says Nigel. “In this well-defined context, AI has invaluable insights to share.”

Summary

Though implementing AI in business models still raises questions around trust, businesses that understand and manage the risks will be best positioned to capitalize on the technology.
