EY refers to the global organization, and may refer to one or more, of the member firms of Ernst & Young Global Limited, each of which is a separate legal entity. Ernst & Young Global Limited, a UK company limited by guarantee, does not provide services to clients.
Some policymakers are focused on the need for consumers to understand how and why AI technologies work, to help promote acceptance of the technologies and create trust in the results AI produces.
In its Four Principles of Explainable Artificial Intelligence report⁷, NIST identifies key qualities of an explainable AI system: “We propose that explainable AI systems deliver accompanying evidence or reasons for outcomes and processes; provide explanations that are understandable to individual users; provide explanations that correctly reflect the system’s process for generating the output; and that a system only operates under conditions for which it was designed and when it reaches sufficient confidence in its output.”
These factors are aimed at addressing the so-called “black box problem”: consumers may understand what data is fed into an AI system and can see the result it produces, but they do not understand how that result is reached.
Transparency is also central to the policymaking debate as a prerequisite for building trust. AI typically works behind the scenes, which means consumers are often unaware that they are engaging with an AI system that is making recommendations, calculations and decisions based on an algorithm. To address these concerns, some policymakers have called for new rules requiring that consumers be notified when they are communicating with AI software, so they can make an informed decision about the use of the technology.