
Era of inclusivity: understanding bias in an AI world


Explore AI challenges and opportunities as organizations set new standards, tackling deepfakes and bias while promoting inclusion and ethics.


In brief
  • AI brings both remarkable advancements and risks like bias, requiring responsible adoption across sectors.
  • The key to mitigating risks such as bias and deepfakes is governance, transparency, and collaboration between the public, private, and academic sectors.
  • Addressing fears around AI, especially in marginalized communities, involves retraining workers and promoting ethical AI development.

Special thanks to Damaris Fynn, EY Americas Risk Data and AI Leader, and Fay Ruby, Senior, Ernst & Young LLP, for contributions to this content.

Artificial intelligence (AI) can be a double-edged sword, presenting both remarkable advancements and potential risks. Industry leaders are reaping the benefits of enhanced productivity and improved decision-making capabilities, and by responsibly mitigating the risks of deepfakes, hallucinations and bias, as well as promoting inclusion, they are setting new standards for organizations. All stakeholders — companies, boards, investors, consumers and regulators — must understand these challenges and opportunities to adopt AI with appropriate guardrails.

A unified effort from the public, private and academic sectors is essential for responsible AI adoption through thoughtful regulations and ethical practices. This approach can enhance business outcomes, increase agility and benefit society. To overcome widespread fears, especially in marginalized communities, it is necessary to ensure the workforce ecosystem embraces AI while measuring, managing and monitoring risks. Implementing rigorous risk management and governance can transform fear into confidence.

Before delving into how to move forward, it is imperative to discuss the fears around AI and how to address them effectively.

AI anxiety

The rapid advancement of generative AI is a catalyst for innovation, yet it continues to ignite fear. While AI is enhancing productivity and decision-making, it also raises public concern about job security, privacy and ethical conduct. Research conducted by Ernst & Young LLP (EY US) highlights fear among employees about the impact of AI on their careers. A significant majority worry that AI may diminish their financial security and hinder their professional advancement: 72% are concerned about salary reductions, 67% fear missing promotions due to a lack of AI proficiency, and 66% worry about falling behind in workplace technology adoption.

The fear that AI might replace jobs is understandable, but history suggests that technological disruption often leads to economic expansion and the creation of new sectors. In past industrial revolutions, concerns about job displacement due to automation prevailed. Yet innovative technologies transformed industries and created new employment opportunities, boosting economic growth, improving production efficiency, lowering product costs and expanding global trade.

Despite the overall trend of innovation creating more jobs than it displaces, there remains a segment of workers whose roles may become redundant or transformed. For individuals in this category, the impact of technological disruption can be daunting. It is imperative for policymakers, industry leaders and educational institutions to ensure that no one is left behind as AI transforms industries. Addressing this challenge requires a concerted effort from these stakeholders to retrain workers in adjacent fields and implement public policies that facilitate a smoother transition for those affected.

This multifaceted approach includes three key steps:

  1. Support entrepreneurship and small business development to diversify local economies and create alternative employment opportunities.
  2. Create specialized training programs that teach trade skills and prepare workers for roles requiring less AI proficiency, with a focus on job placement services to connect displaced workers with new opportunities.
  3. Provide counseling, financial assistance, mental health and other necessary support to ease the transition.

Career fears are only one facet of a broader spectrum of concerns. Recent surveys reveal widespread fear about AI’s potential for misuse. This includes the creation of deceptive deepfakes, AI-generated hallucinations and the amplification of biases.

Deepfakes

Deepfakes are synthetic media in which AI replaces a person’s likeness in an image, video or audio recording with someone else’s. These fabricated voices and images can be difficult to detect. There have been instances of fraudsters using digitally cloned voices of executives to order financial transfers.

To address these challenges, governance, transparency and disclosure laws could require companies to label deepfakes and digitally manipulated content to inform viewers of their authenticity. Promoting the adoption of digital watermarking and authentication can verify content origins and detect manipulations. Strengthening data privacy regulations can safeguard individuals’ personal information from exploitation in deepfake generation or targeted manipulation campaigns. Furthermore, enacting laws that impose legal consequences, such as fines or penalties for maliciously creating or distributing deceptive content, can serve as deterrents.
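One technical building block behind such authentication measures is cryptographic signing of content at publication, so that later manipulation becomes detectable. The Python sketch below illustrates the idea under simplified assumptions: the shared SECRET_KEY and media bytes are hypothetical, and real provenance standards such as C2PA embed certificate-based asymmetric signatures in a file’s metadata rather than relying on a shared key.

  # Minimal sketch of content authentication: the publisher signs a hash
  # of the media at release, and any later edit breaks verification.
  # SECRET_KEY and the media bytes are hypothetical stand-ins.
  import hashlib
  import hmac

  SECRET_KEY = b"publisher-signing-key"

  def sign_content(media: bytes) -> str:
      """Return an authentication tag for the original media bytes."""
      return hmac.new(SECRET_KEY, hashlib.sha256(media).digest(), hashlib.sha256).hexdigest()

  def verify_content(media: bytes, tag: str) -> bool:
      """True only if the media is byte-for-byte unmodified."""
      return hmac.compare_digest(sign_content(media), tag)

  original = b"...original video bytes..."
  tag = sign_content(original)
  print(verify_content(original, tag))                 # True: authentic
  print(verify_content(original + b" tampered", tag))  # False: manipulated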

Hallucinations

Inaccuracies generated by AI, often referred to as hallucinations, pose significant risks to organizations. These inaccuracies can be especially prevalent when relying on publicly available AI chatbots for research or work projects, occasionally producing quotes and citations attributed to nonexistent sources. Boards, companies and investors can mandate that AI systems provide clear explanations of how content is generated to ensure accuracy. Transparency in this context is crucial for companies to assess the credibility of information, establish standards for source verification and fact-checking, and identify misleading content. Additionally, regular audits and compliance checks are essential to uphold standards of accuracy, reliability and transparency.

To address these challenges, companies are increasingly adopting retrieval-augmented generation (RAG) to enhance the accuracy and reliability of AI-generated content. RAG enables an AI system to connect to organizational and external databases when answering a query, and the pattern has three components:

  1. Retrieval: an information retrieval system supplies grounding data, pulling relevant information from external databases or content sources.
  2. Augmentation: the retrieved information is integrated into the generative process, supplementing the large language model (LLM) so it can produce more accurate and contextualized responses.
  3. Generation: the LLM itself formulates the response, drawing on the retrieved information.

RAG provides transparency and traceability, allowing users to see the sources of information used in the generation process. Major search engines, customer support chatbots and media companies are already leveraging RAG to improve search result relevance, provide accurate customer support and assist in content creation. By grounding responses in accurate data, RAG significantly reduces the likelihood of hallucinations, while continuous learning from user feedback enhances accuracy and reliability over time.
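To make the pattern concrete, here is a minimal sketch of the RAG loop in Python. The in-memory document store, keyword-overlap retriever and call_llm placeholder are all illustrative assumptions; production systems typically use vector databases and embedding-based search in front of a hosted LLM.

  # Minimal RAG sketch: retrieve grounding passages, augment the prompt,
  # then generate. Everything here is a toy stand-in for illustration.

  # Toy "external database" of source passages.
  DOCUMENTS = {
      "policy-101": "Expense reports must be filed within 30 days of travel.",
      "policy-102": "Remote employees may claim a home-office stipend annually.",
  }

  def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
      """Rank passages by naive keyword overlap with the query."""
      terms = set(query.lower().split())
      scored = sorted(DOCUMENTS.items(),
                      key=lambda kv: len(terms & set(kv[1].lower().split())),
                      reverse=True)
      return scored[:k]

  def call_llm(prompt: str) -> str:
      """Placeholder for a real model call (e.g., a hosted LLM API)."""
      return f"(model response grounded in a prompt of {len(prompt)} characters)"

  def answer(query: str) -> str:
      passages = retrieve(query)
      # Augment: ground the model in retrieved text and cite source IDs
      # so the response is traceable to its origins.
      context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in passages)
      prompt = (f"Answer using ONLY the sources below, citing their IDs.\n"
                f"Sources:\n{context}\n\nQuestion: {query}")
      return call_llm(prompt)

  print(answer("When must expense reports be filed?"))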

Bias

Another AI risk to consider is bias, which occurs when human biases affect the training data or AI algorithms. Biases in AI can stem from societal structures, historical contexts, data collection methods or inherent limitations within the AI technologies themselves. When organizations deploy AI systems in real-world scenarios, these biases may become more pronounced due to user interactions, demographic representations and misconceptions about AI neutrality. Therefore, it is crucial for data scientists, AI developers, organizational leaders and regulators to recognize and address these biases to develop fair and effective AI tools.

Further analysis of AI systems reveals bias in action: image sets generated for high-paying jobs depict lighter skin tones, while prompts like “social worker” produce darker skin tones. Gender bias is also evident, with women appearing three times less frequently than men in most occupation categories, except for roles like housekeeper and cashier, where women are overrepresented. These findings underscore the pressing need to address bias in AI systems and show how it intersects with challenges already faced by marginalized communities. AI bias can significantly threaten individuals’ economic opportunities and social mobility in areas such as scholarship allocations, mortgage lending and hiring practices. For instance, the US Equal Employment Opportunity Commission settled its first-ever AI discrimination lawsuit against iTutorGroup, whose AI hiring tool discriminated based on age.
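One common way such disparities are quantified in hiring contexts is the “four-fifths rule” used in US employment-discrimination analysis: if one group’s selection rate falls below 80% of the most-selected group’s rate, the tool warrants review. The sketch below applies that arithmetic to hypothetical applicant data; all numbers are invented for illustration.

  # Minimal sketch: applying the four-fifths rule to the selection
  # rates of an AI screening tool. All applicant data is hypothetical.

  def selection_rate(selected: int, applicants: int) -> float:
      """Fraction of a group's applicants that the tool selected."""
      return selected / applicants

  # Hypothetical outcomes, split by a protected characteristic (age).
  groups = {
      "age_under_40": {"applicants": 500, "selected": 150},
      "age_40_plus": {"applicants": 500, "selected": 60},
  }

  rates = {name: selection_rate(g["selected"], g["applicants"])
           for name, g in groups.items()}
  reference = max(rates.values())  # rate of the most-selected group

  for name, rate in rates.items():
      impact_ratio = rate / reference
      status = "REVIEW" if impact_ratio < 0.8 else "ok"  # four-fifths threshold
      print(f"{name}: rate {rate:.0%}, impact ratio {impact_ratio:.2f} [{status}]")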

The growing awareness of the need for ethical standards in AI applications has spurred a broader conversation about ensuring fairness. Concerns about AI’s role in lending practices highlight the potential for biases that could disproportionately affect marginalized communities. Addressing these concerns requires rigorous oversight to foster trust and equity. Boards and investors have a fiduciary responsibility to ensure the responsible implementation of AI, given the significant consequences of bias and potential erosion of shareholder value. Bias in AI can lead to lost revenue, customers and employees, as well as increased legal fees, damage to brand reputation and media backlash.

Opportunities for inclusion

When guided by an AI ethics board, AI can contribute significantly to diversity, equity and inclusion (DEI) efforts. Ensuring inclusivity requires incorporating diverse perspectives in the AI development process. AI can also reveal patterns in organizational practices, such as salary and promotion disparities, and enhance workplace inclusivity through technologies like speech recognition and visual aids. For example, the EY organization has introduced AI technologies that enable content to be heard aloud rather than read, aiding in information processing and supporting neurodivergent employees.
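As a simple illustration of the read-aloud pattern described above, the sketch below uses the open-source pyttsx3 text-to-speech library; this is an assumed stand-in, since the specific technology behind the EY tooling is not described here.

  # Minimal read-aloud sketch using the open-source pyttsx3 library
  # (pip install pyttsx3). An illustrative stand-in, not the specific
  # accessibility tooling referenced above.
  import pyttsx3

  def read_aloud(text: str, words_per_minute: int = 150) -> None:
      """Speak text through the system speech engine."""
      engine = pyttsx3.init()
      engine.setProperty("rate", words_per_minute)  # a slower pace can aid comprehension
      engine.say(text)
      engine.runAndWait()

  read_aloud("The quarterly summary is ready. Key risks appear in section two.")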

Efforts to promote diversity in AI extend to education and training programs. Initiatives like EY Ripples demonstrate how AI literacy and education can enrich underserved communities. These programs impart knowledge about ethical AI development and inclusive design principles while offering direct learning experiences. By introducing new productivity tools and leveling the playing field, firms can better support underrepresented groups, leading to a more inclusive workforce.

Global companies have announced funding for programs that use AI to promote racial equity, economic empowerment, and access to education and health care. These initiatives also advocate for privacy rights and data protection, particularly for marginalized communities.

Using AI responsibly

Building on foundational DEI principles, a comprehensive, vigilant and purposeful responsible AI framework and strategy ensure that systems are developed and deployed in ways that enable organizations in the new AI era to:

  1. Lead and grow by adopting AI responsibly and fostering trust throughout the workforce ecosystem.
  2. Identify and safeguard AI assets and preserve value generated with AI.
  3. Achieve speed to market while navigating legal and regulatory obligations with confidence.

Leading organizations adhere to stringent data protection standards, protecting confidentiality and aligning with ethical norms and legal rights, which minimize privacy risks and unauthorized data access. Additional components of responsible AI include:

Reliability and security

The design of AI systems meets stakeholder expectations and builds trust through consistent performance. Additionally, AI systems and data are secure against unauthorized access, corruption and adversarial attacks, safeguarding the integrity of both input and output data.

Transparency and explainability

Providing appropriate levels of disclosure about the design, purpose and impacts of an organization’s AI systems enables stakeholders to understand, evaluate and correctly use the technology. Enhancing the explainability of AI systems ensures that users can comprehend, challenge and validate the decision-making processes and outputs. This level of clarity is vital for users who need to trust the system and who may also need to verify its decisions.

Fairness and compliance

Assessing the needs of all stakeholders and promoting inclusivity requires designing AI systems to have a positive societal impact and prevent biases. This also includes ensuring that all AI applications comply with relevant laws, regulations and professional standards to avoid legal issues and uphold high ethical standards.

AI governance is the cornerstone of deploying AI responsibly; organizations that want to succeed in this environment and establish trust must treat it as such.

What’s next?

The promise and perils of AI make responsible governance and an AI ethics board imperative in shaping a future where technology uplifts rather than undermines society. Addressing risks such as bias, deepfakes and job displacement, and fostering unity among companies, boards, investors, consumers and regulators, are vital to establishing trust. Collaboration across the public, private and academic sectors is also central to this endeavor. By implementing rigorous risk management and embracing transparency, this approach can lessen fears and unlock immense benefits, ensuring that AI serves as a catalyst for positive change, enhancing productivity and driving innovation.

The views reflected in this article are those of the authors and do not necessarily reflect the views of Ernst & Young LLP or other members of the global EY organization.


Summary 

AI offers tremendous opportunity for productivity and decision-making, but it also introduces challenges such as bias, deepfakes and job displacement. To harness its benefits responsibly, organizations are adopting thoughtful governance, transparency and inclusive practices. Collaboration among the public, private and academic sectors facilitates ethical AI deployment, ensuring that risks are managed and the technology drives positive change.
