Deepfakes
Deepfakes are synthetic media in which AI replaces a person’s likeness in an image, video or audio recording with someone else’s. These fake voices and images can be difficult to detect. There have been instances of fraudsters using digitally cloned voices of executives to order financial transfers.
To address these challenges, governance, transparency and disclosure laws could require companies to label deepfakes and digitally manipulated content so that viewers know it is not authentic. Promoting the adoption of digital watermarking and content authentication can help verify content origins and detect manipulation. Strengthening data privacy regulations can safeguard individuals’ personal information from exploitation in deepfake generation or targeted manipulation campaigns. Furthermore, enacting laws that impose legal consequences, such as fines or penalties for maliciously creating or distributing deceptive content, can serve as deterrents.
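To make the authentication idea concrete, the sketch below signs a hash of a media file so that any subsequent manipulation can be detected. It is a simplified illustration of the provenance concept behind standards such as C2PA, not an implementation of any specific standard; the key handling is hypothetical.

```python
# A minimal sketch of content authentication via signed hashes. Conceptually
# similar to provenance standards such as C2PA, but greatly simplified;
# the shared secret key below is hypothetical (real systems use PKI).

import hashlib
import hmac

SECRET_KEY = b"publisher-signing-key"  # illustrative only

def sign_content(media_bytes: bytes) -> str:
    """Publisher side: derive a tamper-evident signature over the content hash."""
    digest = hashlib.sha256(media_bytes).digest()
    return hmac.new(SECRET_KEY, digest, hashlib.sha256).hexdigest()

def verify_content(media_bytes: bytes, signature: str) -> bool:
    """Viewer side: recompute the signature; any edit to the bytes breaks it."""
    return hmac.compare_digest(sign_content(media_bytes), signature)

original = b"...raw image bytes..."
tag = sign_content(original)
assert verify_content(original, tag)             # authentic content verifies
assert not verify_content(original + b"x", tag)  # manipulated content fails
```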
Hallucinations
Inaccuracies generated by AI, often referred to as hallucinations, pose significant risks to organizations. These inaccuracies are especially common when employees rely on publicly available AI chatbots for research or work projects, sometimes producing quotes and citations attributed to nonexistent sources. Boards, companies and investors can mandate that AI systems provide clear explanations of how content is generated to help ensure accuracy. Transparency in this context is crucial for companies to assess the credibility of information, establish standards for source verification and fact-checking, and identify misleading content. Additionally, regular audits and compliance checks are essential to uphold standards of accuracy, reliability and transparency.
To address these challenges, companies are increasingly adopting retrieval-augmented generation (RAG) to improve the accuracy and reliability of AI-generated content. RAG connects an AI system to organizational and external databases so it can query them for information. “Retrieval” refers to an information retrieval system that supplies grounding data by pulling relevant passages from external databases or content sources. “Augmented” refers to integrating that retrieved information into the generative process: the retrieved data supplements the large language model (LLM), allowing it to produce more accurate, contextualized responses. “Generation” refers to the LLM itself, which uses the retrieved information to formulate a response. RAG also provides transparency and traceability, allowing users to see the sources used in the generation process. Major search engines, customer support chatbots and media companies are already leveraging RAG to improve search relevance, provide accurate customer support and assist in content creation. By grounding responses in accurate data, RAG significantly reduces the likelihood of hallucinations, while continuous learning from user feedback improves accuracy and reliability over time.
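The sketch below walks through the three RAG stages end to end. It is illustrative only: the keyword-overlap retriever and the `llm` callable are hypothetical stand-ins, where production systems typically use vector similarity search and a hosted LLM API.

```python
# A minimal RAG sketch (illustrative only). The retriever and the `llm`
# callable are hypothetical stand-ins, not a specific product's API.

from dataclasses import dataclass

@dataclass
class Document:
    source: str  # where the passage came from (enables traceability)
    text: str    # the grounding passage itself

def retrieve(query: str, index: list[Document], k: int = 3) -> list[Document]:
    """Retrieval: rank stored passages by naive keyword overlap with the query.
    Real systems use vector similarity search instead."""
    terms = set(query.lower().split())
    ranked = sorted(index, key=lambda d: -len(terms & set(d.text.lower().split())))
    return ranked[:k]

def build_prompt(query: str, docs: list[Document]) -> str:
    """Augmentation: splice the retrieved passages into the LLM prompt."""
    context = "\n".join(f"[{d.source}] {d.text}" for d in docs)
    return (
        "Answer using ONLY the sources below, and cite them by name.\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )

def answer(query: str, index: list[Document], llm) -> str:
    """Generation: the LLM formulates a response grounded in the retrieved data."""
    docs = retrieve(query, index)
    return llm(build_prompt(query, docs))  # `llm` is any text-in/text-out callable
```

Because each retrieved passage carries its source label into the prompt, the model can cite where its grounding data came from, which is the traceability benefit described above.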
Bias
Another AI risk to consider is bias, which occurs when human biases affect the training data or the AI algorithms themselves. Bias in AI can stem from historical and societal structures, data collection methods, or inherent limitations of the underlying technology. When organizations deploy AI systems in real-world scenarios, these biases may become more pronounced through user interactions, skewed demographic representation and misconceptions about AI neutrality. It is therefore crucial for data scientists, AI developers, organizational leaders and regulators to recognize and address these biases in order to develop fair and effective AI tools.
Further analysis of AI systems reveals bias in action: image generators asked to depict high-paying jobs tend to produce lighter skin tones, while prompts like “social worker” tend to produce darker skin tones. Gender bias is also evident: in most occupation categories, women appear only about one-third as often as men, yet they are overrepresented in roles like housekeeper and cashier. These patterns underscore the pressing need to address bias in AI systems and show how AI bias intersects with challenges already faced by marginalized communities. AI bias can significantly threaten individuals’ economic opportunities and social mobility in areas such as scholarship allocation, mortgage lending and hiring. For instance, the US Equal Employment Opportunity Commission settled its first AI discrimination lawsuit against iTutorGroup, whose AI hiring tool discriminated based on age.
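To show how such bias can be audited in practice, the sketch below applies the four-fifths (80%) rule, a common screen in US employment selection analysis: if any group’s selection rate falls below 80% of the most-selected group’s rate, the tool is flagged for potential disparate impact. The applicant numbers are made up for illustration.

```python
# A minimal adverse-impact screen for an AI hiring tool, using the four-fifths
# rule from US employment selection guidelines. All numbers are hypothetical.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """`outcomes` maps group -> (number selected, total applicants)."""
    return {group: selected / total for group, (selected, total) in outcomes.items()}

def four_fifths_flags(outcomes: dict[str, tuple[int, int]]) -> dict[str, bool]:
    """Flag any group whose selection rate is below 80% of the best group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: (rate / best) < 0.8 for group, rate in rates.items()}

# Hypothetical screening results from an AI resume filter.
results = {"under_40": (120, 400), "over_40": (45, 300)}
print(selection_rates(results))    # {'under_40': 0.3, 'over_40': 0.15}
print(four_fifths_flags(results))  # {'under_40': False, 'over_40': True}
```

A flag such as the one raised for the over-40 group above does not prove discrimination on its own, but it tells an organization where a deeper audit of its AI tool is warranted.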
The growing awareness of the need for ethical standards in AI applications has spurred a broader conversation about ensuring fairness. Concerns about AI’s role in lending practices highlight the potential for biases that could disproportionately affect marginalized communities. Addressing these concerns requires rigorous oversight to foster trust and equity. Boards and investors have a fiduciary responsibility to ensure the responsible implementation of AI, given the significant consequences of bias and potential erosion of shareholder value. Bias in AI can lead to lost revenue, customers and employees, as well as increased legal fees, damage to brand reputation and media backlash.
Opportunities for inclusion
When guided by an AI ethics board, AI can contribute significantly to diversity, equity and inclusion (DEI) efforts. Ensuring inclusivity requires incorporating diverse perspectives in the AI development process. AI can also reveal patterns in organizational practices, such as salary and promotion disparities, and enhance workplace inclusivity through technologies like speech recognition and visual aids. For example, the EY organization has introduced AI technologies that enable content to be heard aloud rather than read, aiding in information processing and supporting neurodivergent employees.
Efforts to promote diversity in AI extend to education and training programs. Initiatives like EY Ripples demonstrate how AI literacy and education can enrich underserved communities. These programs impart knowledge about ethical AI development and inclusive design principles while offering direct learning experiences. By introducing new productivity tools and leveling the playing field, firms can better support underrepresented groups, leading to a more inclusive workforce.
Global companies have announced funding for programs that use AI to promote racial equity, economic empowerment, and access to education and health care. These initiatives also advocate for privacy rights and data protection, particularly for marginalized communities.
Using AI responsibly
Building on foundational DEI principles, a comprehensive, vigilant and purposeful responsible AI framework and strategy help ensure that systems are developed and deployed in ways that enable organizations in the new AI era to:
- Lead and grow by adopting AI responsibly and fostering trust throughout the workforce ecosystem.
- Identify and safeguard AI assets and preserve value generated with AI.
- Achieve speed to market while navigating legal and regulatory obligations with confidence.
Leading organizations adhere to stringent data protection standards, protecting confidentiality and aligning with ethical norms and legal rights to minimize privacy risks and unauthorized data access. Additional components of responsible AI include:
Reliability and security
AI systems are designed to meet stakeholder expectations and build trust through consistent performance. Additionally, AI systems and data are secured against unauthorized access, corruption and adversarial attacks, safeguarding the integrity of both input and output data.
Transparency and explainability
Providing appropriate levels of disclosure about the design, purpose and impacts of an organization’s AI systems enables stakeholders to understand, evaluate and correctly use the technology. Enhancing the explainability of AI systems ensures that users can comprehend, challenge and validate the decision-making processes and outputs. This clarity is vital for users who need to trust the system and be able to verify its decisions when necessary.
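One simple, model-agnostic way to provide such explanations is perturbation-based attribution: replace each input feature with a neutral baseline and measure how the model’s output moves. The sketch below uses a made-up linear scoring model; the feature names and weights are hypothetical.

```python
# A minimal explainability sketch: leave-one-feature-out attribution.
# The scoring model, features and weights are hypothetical.

def score(features: dict[str, float]) -> float:
    """Stand-in model: a weighted sum where higher scores are better."""
    weights = {"income": 0.5, "debt_ratio": -0.3, "years_employed": 0.2}
    return sum(weights[name] * value for name, value in features.items())

def attribution(features: dict[str, float], baseline: float = 0.0) -> dict[str, float]:
    """Explain a decision by measuring how the score changes when each
    feature is replaced with a neutral baseline value."""
    full = score(features)
    contributions = {}
    for name in features:
        perturbed = dict(features, **{name: baseline})
        contributions[name] = full - score(perturbed)  # contribution of `name`
    return contributions

applicant = {"income": 4.0, "debt_ratio": 2.0, "years_employed": 5.0}
print(attribution(applicant))
# {'income': 2.0, 'debt_ratio': -0.6, 'years_employed': 1.0}
```

Attributions like these let a user challenge a specific decision (for example, why a debt ratio counted against an applicant) rather than having to accept an opaque score.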
Fairness and compliance
Promoting inclusivity requires assessing the needs of all stakeholders and designing AI systems to have a positive societal impact and prevent bias. This also includes ensuring that all AI applications comply with relevant laws, regulations and professional standards to avoid legal issues and uphold high ethical standards.
AI governance is the cornerstone of deploying AI responsibly; it is what allows organizations to succeed in this environment and establish trust.
What’s next?
The promise and perils of AI make responsible governance and an AI ethics board imperative in shaping a future where technology uplifts rather than undermines society. Addressing risks such as bias, deepfakes and job displacement, and fostering alignment among companies, boards, investors, consumers and regulators, are vital to establishing trust. Collaboration across the public, private and academic sectors is central to this endeavor. By implementing rigorous risk management and embracing transparency, organizations can lessen fears, mitigate risks and unlock immense benefits, ensuring that AI serves as a catalyst for positive change, enhancing productivity and driving innovation.