
Four actions to pioneer responsible AI in any industry

Leaders in any sector who want to pursue new technology need to consider developing responsible and ethical AI frameworks.


In brief
  • A key step in responsible AI is defining its applications in your business today and in the future.
  • Risk tolerance and AI governance frameworks vary with regulatory, geographic and industry considerations.
  • EY-Parthenon teams can help companies with sustainable and ethical AI strategy including continuous education, robust governance and process management.

With the rapid proliferation of artificial intelligence (AI), companies are faced with a key question: how do we harness the full potential of AI while ensuring responsible uses and outcomes?

How AI and generative AI (GenAI) are defined, and the risks that may ensue, depend on the sector, industry, company and teams involved. As businesses grapple with AI definitions and risk models, regulators in Asia, Europe and the United States are quickly setting varied goalposts for the use of AI-generated content and corporate accountability for missteps. In the US, the National Institute of Standards and Technology (NIST) is developing guidelines and leading practices on secure, trustworthy AI systems with input from the U.S. AI Safety Institute Consortium of more than 200 companies, including EY.

Yes, many regulated businesses, especially those in the financial and health sectors, are accustomed to risk models. And in a wave of “do the right thing” thinking, many companies are not only creating new products and services to meet the demand for this ethos, they are also adopting internal compliance metrics to meet environmental, social and governance (ESG) and sustainability standards. With machines that are continuously learning, however, GenAI may create unexpected outputs. While financial and health organizations were early in navigating responsible AI and the dynamic regulatory landscape, other industries with fewer regulatory requirements may just now be contemplating how to transform their businesses to govern large language models, machine learning and GenAI.

A framework for responsible AI can instill confidence and enable your organization to compete, protect and accelerate an AI strategy. For leaders in any industry, there are four actions to get right to ensure the responsible use of AI, and five areas of challenge to consider on the road forward. First, the actions to take now:

1. Define AI for your company and how to monitor responsible AI outcomes

Responsible AI outcomes are predicated on your company’s unique business needs. Define AI for your company with concrete guidance and examples. Maybe you will use more traditional machine learning models; maybe your use cases will increasingly incorporate GenAI. Either way, defining what AI means to you can prove challenging because there is no consensus regulatory or academic definition, and there are grey areas, such as statistical models, that may or may not be considered AI.


The definition of what AI means to your company should include an inventory of AI tools that you use, buy and build, with the future in mind. It should address how you will ethically source the data you need and how you will use it. This will be the basis for what you monitor and report.
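As a purely illustrative sketch of such an inventory (the record fields, names and example values below are assumptions, not a prescribed schema), a Python entry might capture each tool you use, buy or build, along with its data sources and whether those sources have been reviewed for ethical use:

from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIInventoryEntry:
    # One record in a hypothetical company-wide AI inventory.
    name: str                              # e.g., "customer-support-assistant"
    category: str                          # "traditional ML", "GenAI", "statistical model", ...
    origin: str                            # "built", "bought" or "embedded in a vendor product"
    data_sources: list[str] = field(default_factory=list)
    data_ethically_sourced: bool = False   # has data provenance been reviewed?
    owner: str = ""                        # accountable team or individual
    last_reviewed: date | None = None      # date of last governance review

# Registering a GenAI use case; a traditional statistical model would get an entry too.
inventory = [
    AIInventoryEntry(
        name="customer-support-assistant",
        category="GenAI",
        origin="bought",
        data_sources=["support tickets", "product documentation"],
        data_ethically_sourced=True,
        owner="CX platform team",
        last_reviewed=date(2024, 5, 1),
    ),
]

An inventory along these lines gives the monitoring and reporting described in the later steps a concrete object to attach to.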


2. Establish your goals and risk tolerance

Risk can be highly variable by industry, location and more: think AI-assisted drug discovery vs. virtually trying out a pair of eyeglasses online. Companies must determine their own risk threshold to influence their AI governance framework, which can be stratified (a simple sketch follows the list):

  • Musts: Fulfill legal and regulatory requirements
  • Shoulds: Employ practices that impact and benefit both your business and society
  • Coulds: Employ practices that serve societal interests but may not necessarily positively impact the business 
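As a minimal sketch of how this stratification might be encoded (the tier descriptions mirror the list above; the example practices and the gating rule are hypothetical):

from enum import Enum

class GovernanceTier(Enum):
    MUST = "fulfills legal and regulatory requirements"
    SHOULD = "benefits both the business and society"
    COULD = "serves societal interests; business upside uncertain"

# Hypothetical mapping of practices to tiers for an EU-facing software company
practices = {
    "EU AI Act conformity assessment": GovernanceTier.MUST,
    "bias testing beyond regulatory minimums": GovernanceTier.SHOULD,
    "open-sourcing internal fairness tooling": GovernanceTier.COULD,
}

def blocks_deployment(tier: GovernanceTier) -> bool:
    # Only "musts" block deployment; "shoulds" and "coulds" inform prioritization.
    return tier is GovernanceTier.MUST

for practice, tier in practices.items():
    print(f"{practice}: {tier.name} (blocking={blocks_deployment(tier)})")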

Even innovation-focused technology companies may need to reconsider their definitions and risk in light of new regulatory standards. An EY team is helping a global software provider conduct an ISO 42001 pre-assessment and integrate European Union Artificial Intelligence Act compliance into the company’s corporate risk framework. By aligning innovation goals more strategically with responsible AI, the software company can better comply with emerging AI and GenAI standards.


3. Determine relative responsible AI risk for different use cases

The challenges of employing artificial intelligence include ensuring that data outputs are reliable, consistent and ethical enough to meet or exceed regulatory requirements and commercial return on investment (ROI) thresholds. To mitigate unintended outcomes when humans or machines write code, team diversity is one part of an important system of quality mechanisms. At the same time, consider how fast you can develop your AI-related concept: make sure governance does not stifle the speed of innovation, and that high-risk applications are properly managed to enable an efficient go-to-market strategy.
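One illustrative way to weigh relative risk per use case is a simple weighted score that routes only high-scoring applications to heavier review, so governance slows only what warrants it. The factors, weights and thresholds below are assumptions a real governance board would calibrate, not a standard:

# Hypothetical risk factors, each scored 1 (low) to 5 (high); weights are illustrative.
WEIGHTS = {
    "impact_on_individuals": 3,    # drug discovery scores high; eyeglass try-on scores low
    "regulatory_exposure": 3,
    "output_unpredictability": 2,  # continuously learning GenAI tends to score higher
    "data_sensitivity": 2,
}

def risk_score(factors: dict[str, int]) -> int:
    return sum(WEIGHTS[name] * level for name, level in factors.items())

def review_track(score: int) -> str:
    # Threshold values are placeholders a governance board would tune.
    if score >= 35:
        return "full risk-committee review"
    if score >= 20:
        return "standard model-risk review"
    return "lightweight self-assessment"

use_cases = {
    "virtual eyeglass try-on": {"impact_on_individuals": 1, "regulatory_exposure": 1,
                                "output_unpredictability": 2, "data_sensitivity": 2},
    "AI-assisted drug discovery": {"impact_on_individuals": 5, "regulatory_exposure": 5,
                                   "output_unpredictability": 3, "data_sensitivity": 4},
}

for name, factors in use_cases.items():
    score = risk_score(factors)
    print(f"{name}: score={score} -> {review_track(score)}")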


In the financial industry, for example, despite longstanding experience with model risk management, many practices are manual and can be ill-suited for the continuous learning capabilities of GenAI. EY teams helped one financial institution update traditional AI governance policies to encompass GenAI and integrate technology solutions. This included an AI-managed system for model validation and testing. The refresh helped the client advance its AI model risk management activity and prepare to scale commercial GenAI applications.


4. Create a sustainable AI strategy 

Achieving responsible AI starts with a sustainable AI strategy embedded in your culture and supported by education and monitoring tools, with humans at the center. Foster a culture in which stakeholders are continuously educated on the importance of responsible and ethical AI practices, aligning management and teams with evolving AI standards and vision.

  • Governance: Implement robust enterprise-level controls. Delineate roles clearly and update organizational policies.
  • Process: Construct processes to manage risks and streamline reporting to regulators and other stakeholders. Establish continuous process flows, such as record keeping and automated logging, for heightened accountability (a minimal logging sketch follows this list).
  • Monitoring and documentation: Design an ongoing monitoring framework to detect and mitigate risks as AI systems evolve over time. This will help maintain adherence to established risk thresholds and prevent unintended outcomes. Ensure robust documentation of system performance that can be referenced in case of policy breaches.
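To illustrate the automated logging called out above (the logger name, record schema and fields are assumptions, not a prescribed standard), each model interaction could append a structured entry to an audit log that monitoring and breach reviews can later query:

import json
import logging
from datetime import datetime, timezone

# Structured audit logger; in practice records would go to durable, access-controlled storage.
audit = logging.getLogger("ai.audit")
audit.setLevel(logging.INFO)
audit.addHandler(logging.FileHandler("ai_audit.log"))

def log_model_call(model_id: str, prompt_hash: str, output_hash: str,
                   risk_tier: str, reviewer: str | None = None) -> None:
    # Append one auditable record per model interaction; hashes avoid storing raw text.
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "prompt_hash": prompt_hash,
        "output_hash": output_hash,
        "risk_tier": risk_tier,
        "reviewer": reviewer,  # human-in-the-loop sign-off, if any
    }
    audit.info(json.dumps(record))

log_model_call("support-assistant-v2", "sha256:1a2b", "sha256:3c4d",
               risk_tier="SHOULD", reviewer="governance-lead")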

EY teams helped a global biopharma company develop an AI governance framework that embraced responsible AI principles including transparency, fairness and human-centricity. EY teams used their proprietary responsible AI framework to conduct a comprehensive risk assessment and evaluate the biopharma’s existing AI risk management template, responsible AI principles and how they had been rolled out and understood across the business. While the company was initially ahead of the curve, it was not always managing project-specific AI risks in line with its responsible AI principles. Its AI approach also needed to better anticipate future compliance and regulation.

As businesses navigate the complexities and advancements of AI, they can use a responsible AI framework to instill confidence and meet the obligation to anchor this powerful technology in ethics and responsibility, because achieving responsible AI is an ongoing challenge. The EY organization has identified five challenging areas where leaders can focus risk management efforts:

  1. Data: Lack of preparation of organizational data, risking access by external parties, such as outside collaborators and sources.
  2. Performance: Decentralized AI policies that widen access to the tools too soon, before proper training and governance are established.
  3. Algorithms: Complex technologies that may allow unique AI outputs for individual prompts, without tracking and retaining AI training data.
  4. Design: Inscrutable practices, such as black-box AI solutions and data changes within models.
  5. Training: Lack of thorough knowledge-led training and comprehensive understanding of GenAI and its associated risks can lead to inadvertent misuse.

For organizations ready to embrace this continuous and collaborative journey, EY-Parthenon teams can help define what AI means, develop a responsible AI framework and collaborate to implement AI and GenAI strategies. Learn more about responsible AI and AI strategy development.

Thanks to EY-Parthenon colleagues Lori Kim, Sophie Chen and Caitie Duffett for their contributions to this article.

Summary

As the use of artificial intelligence rapidly expands, companies need to think about responsible AI as they leverage its potential while adhering to global regulations and ethical considerations. This involves defining AI specifically for each company, assessing use cases for risk and ROI, and creating sustainable AI strategies that include education, monitoring and robust governance. The EY-Parthenon practice has helped companies across industries use its proprietary ethical framework to build confidence in their AI strategy by incorporating responsible AI practices that emphasize transparency, fairness and human-centricity. Reach out to learn more.
