5 Steps to Ensure Responsible AI Use
As your organization creates an AI strategy, here are five steps you can take right now to help ensure responsible use:
1. Take a "responsible AI by design" approach to mitigate risks. Weave responsible AI principles into your overall framework, integrating clear boundaries and priorities into your development lifecycle. "For example, create technical controls for development teams, conduct impact assessments and do regular fairness testing," says Vanvaria. "Orchestrate all these tasks with an operating model that works for your organization, with the right roles coming together at the right times." (A minimal fairness-testing sketch follows this list.)
2. Establish a responsible AI framework grounded in industry standards. Develop a deep understanding of existing and emerging industry standards for AI. "Make sure your AI framework takes different AI usage patterns into account," says Vanvaria. "For example, using enterprise ChatGPT versus developing GenAI internally are different types of AI use."
3. Invest in technology capabilities for continuous monitoring. Set up systems that constantly monitor your AI models and data sets, checking for inconsistencies, bias, and anomalies that could indicate a cybersecurity threat. "Once your models are operationalized, how are you going to have controls that will ensure that model and data drifts are not happening?" says Kapoor. To offset risks, build technical guardrails that highlight problems and train your algorithms to minimize bad output. Some examples include ModelOps platforms, automated testing, and other monitoring solutions. (A simple drift-check sketch follows this list.)
4. Work to ensure ongoing transparency and accountability. At every level, keep the lines of communication open to help ensure trust in AI systems. "Inform users that they may be interacting with AI systems, explain how decisions are being made by the AI system and leverage confidence scores and human-in-the-loop to evaluate AI system decision-making," says Vanvaria. (A confidence-based routing sketch follows this list.)
5. Create a rigorous training program anchored in real-world scenarios. Build a culture of awareness in your organization, with AI training sessions that consider real scenarios of what could go wrong—and how to mitigate those risks. "The more hands-on you can make the training, the less anxiety employees will have," says Kapoor. "And the more AI tools you can give them access to, the more they will know what to expect and how to add value to the organization."
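To make the "regular fairness testing" in step 1 concrete, here is a minimal sketch of one common check, demographic parity, which compares positive-prediction rates across groups. The metric, the 0.25 gate, and the data are illustrative assumptions, not a prescribed control:

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return (gap, per-group positive rates); a gap of 0.0 means
    every group receives positive predictions at the same rate."""
    positives, totals = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Illustrative binary predictions tagged with an applicant group.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
gap, rates = demographic_parity_gap(preds, groups)
print(rates)  # per-group positive-prediction rates
assert gap <= 0.25, f"fairness gate failed: gap={gap:.2f}"
```

Run as an automated gate in the development pipeline, a check like this becomes one of the technical controls Vanvaria describes.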
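For the continuous monitoring in step 3, one conventional way to detect the data drift Kapoor warns about is the population stability index (PSI), which compares a production sample of a model input against its training-time distribution. The thresholds below follow a common rule of thumb but are assumptions, not a standard:

```python
import math

def population_stability_index(expected, actual, bins=10):
    """PSI between a training-time sample ('expected') and a
    production sample ('actual') of the same feature.
    Rule of thumb: < 0.1 stable, 0.1-0.25 watch, > 0.25 investigate."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against identical values

    def bucket_rates(values):
        counts = [0] * bins
        for v in values:
            counts[min(int((v - lo) / width), bins - 1)] += 1
        # Smooth empty buckets so the log term stays defined.
        return [(c + 0.5) / (len(values) + 0.5 * bins) for c in counts]

    e, a = bucket_rates(expected), bucket_rates(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.1 * i for i in range(100)]        # training-time sample
live     = [0.1 * i + 3.0 for i in range(100)]  # shifted production sample
psi = population_stability_index(baseline, live)
if psi > 0.25:
    print(f"ALERT: input drift detected (PSI={psi:.2f})")
```

A ModelOps platform would typically run checks like this on a schedule and raise alerts automatically; the sketch shows only the shape of the control.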
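And for step 4, a sketch of the confidence-score plus human-in-the-loop pattern Vanvaria mentions: decisions the model is unsure about are queued for a person instead of being auto-applied. The 0.8 threshold and field names are placeholders:

```python
def route_decision(prediction, confidence, threshold=0.8):
    """Auto-apply high-confidence AI decisions; queue the rest for a
    human reviewer, recording how each decision was made."""
    if confidence >= threshold:
        return {"decision": prediction, "decided_by": "model",
                "confidence": confidence}
    return {"decision": None, "decided_by": "pending_human_review",
            "confidence": confidence}

for pred, conf in [("approve", 0.95), ("deny", 0.55)]:
    print(route_decision(pred, conf))
# {'decision': 'approve', 'decided_by': 'model', 'confidence': 0.95}
# {'decision': None, 'decided_by': 'pending_human_review', 'confidence': 0.55}
```

Recording the "decided_by" field also supports the transparency goal: every decision carries a record of whether a model or a person made it.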
The Future of AI Governance
As businesses continue to integrate AI at every level, successful governance will depend on making sure legal, compliance, risk, IT, and business leaders have a seat at the table when making decisions. "Because of the enhanced risks of AI, they need to act in collaboration to help ensure that every angle is understood and addressed," says Kapoor. Many enterprises and large corporations are adopting a hub-and-spoke model for AI use across sites and branch offices. "Corporations need to have some kind of central governance to make sure all these pieces of the puzzle are fitting together well," says Kapoor.
While it might seem ironic, AI itself could be a helpful tool for AI governance. Algorithms can be used to test each other for bias and errors, and with an exponential rise in AI-related cybercrime, organizations may be wise to use AI-powered cybersecurity tools to detect malicious intent. Still, keeping a human in the loop will remain a crucial component of any responsible AI framework. "It’s important to keep human oversight front and center," says Vanvaria. "It’s part of maintaining transparency, which is a key component for building trust in AI systems."
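As a rough illustration of that cross-checking idea, the sketch below has one component review another's output and escalate disagreements to a human. The checker here is a toy keyword filter standing in for what would, in practice, be an independent model; every name in it is hypothetical:

```python
def checker(text):
    """Toy stand-in for a second model that flags risky claims."""
    flagged_terms = ("guaranteed", "always", "never fails")
    return [t for t in flagged_terms if t in text.lower()]

def reviewed_response(text):
    issues = checker(text)
    if issues:
        # The two components disagree: keep a human in the loop.
        return {"status": "escalated_to_human", "issues": issues}
    return {"status": "released", "text": text}

print(reviewed_response("Returns are guaranteed every quarter."))
print(reviewed_response("Past performance does not predict results."))
```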