
Addressing AI risks: Preventing bias and achieving ethical AI use

Becoming wise to GenAI’s weaknesses—and how to offset them—is a key component of a successful AI adoption strategy.


In brief
  • Addressing AI risks like bias and job displacement is crucial for ethical AI use.
  • A "responsible AI by design" approach and continuous monitoring are key steps.
  • Transparency and human oversight are essential for trust in AI systems.

As organizations around the world adopt generative AI (GenAI) as part of their processes, many employees and business leaders are rightly concerned about how to use this powerful technology responsibly. While artificial intelligence (AI) can fuel productivity and inform decision-making, it comes with potential risks and pitfalls — including bias amplification, security vulnerabilities and hallucinations.

AI Adoption: Areas of Concern

Employees and business leaders share several areas of concern regarding AI. While those concerns vary based on where organizations are in the adoption journey, a few main pain points include:

  • Job displacement: As AI-powered tools speed up productivity and drastically cut down on rote tasks such as data entry, some workers fear they will lose their jobs. "Employees are undoubtedly concerned about the disruption and job displacement," says Kapish Vanvaria, EY Americas Risk Leader. "However, we also see many who embrace this as an opportunity."
  • Talent gaps: Enterprise leaders are worried about finding enough workers who are well-versed in AI tools and methods. Filling that gap is important for scaling up and driving value with AI, according to Samta Kapoor, EY Americas AI Energy and Trusted AI Leader.
  • Bias in AI: AI bias can creep in at any phase in the lifecycle—from data collection to design to algorithmic function. "We’ve seen big companies in the news when their AI model had eliminated a segment of society primarily because of the data they were using to train the models," says Kapoor. "Companies should be very worried about reputation loss and regulations if their AI systems are biased and proper controls are not put in place."
  • Deepfakes and hallucinations: Unless they are adequately trained on quality data sets, AI algorithms can hallucinate (present false information), and bad actors can harness the technology to create misleading images, audio, and video. "It’s important to acknowledge the limitations of current AI solutions and implement robust testing, validation, and monitoring for cyber threats," says Vanvaria.

Defining Ethical AI

Ethical or responsible AI use can be difficult to define without a solid set of objective standards. Within an organization, creating a framework with clear policies and procedures about how AI can and cannot be used is a great place to start. "It’s important to have strong governance. Bringing the right stakeholders together from the start is key," says Kapoor. Take time to define terms and ensure employees in every layer of your organization understand the framework. "For example, when your organization defines fairness, what does it mean to your data scientist? What does it mean to your CEO, who might be using some form of AI in different use cases?"

To make ethical AI less subjective, Vanvaria suggests grounding an organization’s AI usage framework on existing regulations and recognized standards and guidelines, such as the NIST AI Risk Management Framework (AI RMF) and the European Union AI Act. "Quantify confidence in AI solutions through metrics and benchmarking when possible," says Vanvaria. "It’s also essential to make sure AI solutions are continuously monitored, and that humans are ultimately accountable for AI outcomes."

5 Steps to Ensure Responsible AI Use

As your organization creates an AI strategy, here are five steps you can take right now to help ensure responsible use:


1. Take a "responsible AI by design" approach to mitigate risks. Weave responsible AI principles into your overall framework, integrating clear boundaries and priorities into your development lifecycle. "For example, create technical controls for development teams, conduct impact assessments and do regular fairness testing," says Vanvaria. "Orchestrate all these tasks with an operating model that works for your organization, with the right roles coming together at the right times."
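
To make "regular fairness testing" concrete, here is a minimal Python sketch of one common check, the demographic parity gap; the function names, sample data, and threshold are hypothetical illustrations that would be replaced by your organization's own metrics and policy values.

```python
# Illustrative fairness test: demographic parity gap.
# All names, data, and the threshold are hypothetical examples,
# not part of any specific framework or regulation.

MAX_GAP = 0.10  # assumed policy threshold for the parity gap

def selection_rate(preds, groups, value):
    """Share of positive predictions within one demographic group."""
    member_preds = [p for p, g in zip(preds, groups) if g == value]
    return sum(member_preds) / len(member_preds) if member_preds else 0.0

def demographic_parity_gap(preds, groups):
    """Largest difference in selection rates across all groups."""
    rates = [selection_rate(preds, groups, v) for v in set(groups)]
    return max(rates) - min(rates)

# Example: binary screening decisions (1 = advance) by applicant group.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

gap = demographic_parity_gap(preds, groups)
print(f"demographic parity gap: {gap:.2f}")
if gap > MAX_GAP:
    print("FAIL: gap exceeds the policy threshold; escalate for review.")
```

Running a check like this on a schedule, and blocking releases when it fails, is one way to turn a fairness principle into the kind of technical control Vanvaria describes.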


2. Establish a responsible AI framework grounded on industry standards. Develop a deep understanding of existing and emerging industry standards for AI. "Make sure your AI framework takes different AI usage patterns into account," says Vanvaria. "For example, using enterprise ChatGPT versus developing GenAI internally are different types of AI use."
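
One hypothetical way to make a framework pattern-aware is to encode each usage pattern and its required controls as data that governance tooling can check. The sketch below is illustrative only; the pattern names and control lists are assumptions, not anything prescribed by NIST or the EU AI Act.

```python
# Hypothetical sketch: encoding AI usage patterns and their required
# controls as data, so reviews can apply the right controls to the
# right pattern. Names and control lists are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class UsagePolicy:
    description: str
    required_controls: list[str] = field(default_factory=list)

POLICIES = {
    "vendor_llm": UsagePolicy(
        description="Enterprise use of a vendor-hosted LLM",
        required_controls=["vendor risk assessment",
                           "data-loss prevention",
                           "acceptable-use training"],
    ),
    "internal_genai": UsagePolicy(
        description="GenAI developed and hosted internally",
        required_controls=["impact assessment",
                           "fairness testing",
                           "drift monitoring",
                           "model documentation"],
    ),
}

for name, policy in POLICIES.items():
    print(f"{name}: {', '.join(policy.required_controls)}")
```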


3. Invest in technology capabilities for continuous monitoring. Set up systems that will monitor your AI models and data sets constantly, checking for inconsistencies, bias, and anomalies that could indicate a cybersecurity threat. "Once your models are operationalized, how are you going to have controls that will ensure that model and data drifts are not happening?" says Kapoor. To offset risks, build technical guardrails that highlight problems and train your algorithms to minimize bad output. Some examples include ModelOps platforms, automated testing, and other monitoring solutions.
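
As one illustration of what such a guardrail can look like, the sketch below computes the population stability index (PSI), a widely used data-drift metric that compares live inputs against the training baseline. The bins, threshold, and sample values are assumptions for illustration, not a specific monitoring product's API.

```python
# Minimal data-drift check using the population stability index (PSI).
# Bins, threshold, and sample data are illustrative assumptions.
import math

def psi(expected, actual, bins=10):
    """PSI between a baseline sample and a live production sample."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against zero-width bins

    def shares(sample):
        counts = [0] * bins
        for x in sample:
            counts[min(int((x - lo) / width), bins - 1)] += 1
        return [max(c / len(sample), 1e-6) for c in counts]  # avoid log(0)

    e, a = shares(expected), shares(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.20, 0.30, 0.35, 0.40, 0.45, 0.50, 0.55, 0.60, 0.70, 0.80]
live     = [0.50, 0.60, 0.65, 0.70, 0.75, 0.80, 0.85, 0.90, 0.95, 1.00]

score = psi(baseline, live)
print(f"PSI = {score:.3f}")
if score > 0.2:  # 0.2 is a commonly cited significant-drift threshold
    print("ALERT: significant input drift; investigate and consider retraining.")
```

In practice a ModelOps platform would run checks like this continuously and raise the alert automatically; the point is that "model and data drift" can be quantified and wired into controls.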


4. Work to ensure ongoing transparency and accountability. At every level, keep the lines of communication open to help ensure trust in AI systems. "Inform users that they may be interacting with AI systems, explain how decisions are being made by the AI system and leverage confidence scores and human-in-the-loop to evaluate AI system decision-making," says Vanvaria.
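
Here is a minimal sketch of what confidence scores plus human-in-the-loop review can look like in code, assuming a hypothetical confidence threshold and record format; outputs below the threshold are routed to a person instead of being applied automatically.

```python
# Minimal sketch of confidence-based routing: AI outputs below an
# assumed confidence threshold go to a human reviewer rather than
# being applied automatically. Threshold and fields are hypothetical.

CONFIDENCE_THRESHOLD = 0.85  # assumed policy value; tune per use case

def route(record):
    """Decide who acts on an AI output: the system or a human."""
    if record["confidence"] >= CONFIDENCE_THRESHOLD:
        return "auto-apply"    # high confidence: proceed, but keep an audit log
    return "human-review"      # low confidence: a person makes the call

outputs = [
    {"id": 1, "decision": "approve claim", "confidence": 0.97},
    {"id": 2, "decision": "deny claim",    "confidence": 0.61},
]

for o in outputs:
    # Log the route and confidence so decisions stay explainable.
    print(f"case {o['id']}: {route(o)} (confidence={o['confidence']:.2f})")
```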


5. Create a rigorous training program anchored in real-world scenarios. Build a culture of awareness in your organization, with AI training sessions that consider real scenarios of what could go wrong—and how to mitigate those risks. "The more hands-on you can make the training, the less anxiety employees will have," says Kapoor. "And the more AI tools you can give them access to, the more they will know what to expect and how to add value to the organization."


The Future of AI Governance

As businesses continue to integrate AI at every level, successful governance will depend on making sure legal, compliance, risk, IT, and business leaders have a seat at the table when making decisions. "Because of the enhanced risks of AI, they need to act in collaboration to help ensure that every angle is understood and addressed," says Kapoor. Many enterprises and large corporations are adopting a hub-and-spoke model for AI use across sites and branch offices. "Corporations need to have some kind of central governance to make sure all these pieces of the puzzle are fitting together well," says Kapoor.


While it might seem ironic, AI itself could be a helpful tool for AI governance. Algorithms can be used to test each other for bias and errors, and with the rapid rise in AI-related cybercrime, organizations may be wise to use AI-powered cybersecurity tools to detect malicious intent. Still, keeping a human in the loop will remain a crucial component of any responsible AI framework. "It’s important to keep human oversight front and center," says Vanvaria. "It’s part of maintaining transparency, which is a key component for building trust in AI systems."

Summary

As organizations adopt GenAI, employees and leaders are understandably concerned about risks ranging from job displacement to bias and hallucinations. Strong governance that brings the right stakeholders together from the start, frameworks grounded in recognized standards, continuous monitoring, and human oversight allow organizations to tackle AI's ethical challenges head-on, mitigate risks, and maintain trust in AI systems.
