1. Define AI for your company and how to monitor responsible AI outcomes
Responsible AI outcomes are predicated on your company’s unique business needs. Define AI for your company with concrete guidance and examples. You may rely on more traditional machine learning models, or your use cases may increasingly incorporate GenAI. Either way, defining what AI means to you can prove challenging because there is no consensus regulatory or academic definition, and there are grey areas, such as statistical models, that may or may not be considered AI.
Your company’s definition of AI should include an inventory of the AI tools you use, buy and build, both today and as you plan for the future. It should address how you will ethically source the data you need and how you will use it. This inventory will be the basis for what you monitor and report.
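To make such an inventory actionable, it helps to decide up front what each entry should capture. Below is a minimal sketch of one possible entry structure, written as a Python dataclass; the field names, the Sourcing categories and the example record are illustrative assumptions, not a prescribed schema.

```python
# A minimal sketch of an AI inventory entry. Field names and the
# example record are illustrative assumptions, not a standard schema.
from dataclasses import dataclass, field
from enum import Enum

class Sourcing(Enum):
    USED = "used"      # third-party service consumed as-is
    BOUGHT = "bought"  # licensed and deployed internally
    BUILT = "built"    # developed in-house

@dataclass
class AIInventoryEntry:
    name: str
    sourcing: Sourcing
    use_case: str
    data_provenance: str              # how the data was ethically sourced
    owner: str                        # accountable business owner
    is_genai: bool = False
    monitored_metrics: list[str] = field(default_factory=list)

# Example: a hypothetical GenAI support chatbot
support_bot = AIInventoryEntry(
    name="support-chatbot",
    sourcing=Sourcing.BOUGHT,
    use_case="customer support triage",
    data_provenance="licensed vendor corpus plus consented chat logs",
    owner="Customer Operations",
    is_genai=True,
    monitored_metrics=["hallucination rate", "escalation accuracy"],
)
```

Recording sourcing (use, buy, build) and data provenance per entry gives the monitoring and reporting described above a concrete unit to attach to.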
2. Establish your goals and identity
Risk can vary widely by industry, location and more: think AI-assisted drug discovery vs. virtually trying on a pair of eyeglasses online. Companies must determine their own risk threshold to shape their AI governance framework, which can be stratified into three tiers (a brief sketch follows the list):
- Musts: Fulfill legal and regulatory requirements
- Shoulds: Employ practices that impact and benefit both your business and society
- Coulds: Employ practices that serve societal interests but may not necessarily positively impact the business
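As one way to operationalize these tiers, the short Python sketch below sorts governance practices so the non-negotiable items surface first; the Tier and Practice structures and the example practices are illustrative assumptions.

```python
# A minimal sketch of stratifying governance practices into the
# three tiers above; example practices are illustrative assumptions.
from enum import Enum
from typing import NamedTuple

class Tier(Enum):
    MUST = "must"      # legal and regulatory requirements
    SHOULD = "should"  # benefits both business and society
    COULD = "could"    # serves society, uncertain business impact

class Practice(NamedTuple):
    name: str
    tier: Tier

practices = [
    Practice("open-sourcing internal fairness tooling", Tier.COULD),
    Practice("EU AI Act conformity assessment", Tier.MUST),
    Practice("bias testing beyond legal minimums", Tier.SHOULD),
]

# List the musts first when planning governance work
for p in sorted(practices, key=lambda p: list(Tier).index(p.tier)):
    print(f"[{p.tier.value.upper()}] {p.name}")
```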
Even innovation-focused technology companies may need to revisit their definitions and risk posture in light of new regulatory standards. An EY team is helping a global software provider conduct an ISO 42001 pre-assessment and integrate European Union Artificial Intelligence Act compliance into the company’s corporate risk framework. By aligning its innovation goals more strategically with responsible AI, the software company can better comply with emerging AI and GenAI standards.
3. Determine relative responsible AI risk for different use cases
The challenges of employing artificial intelligence include ensuring that outputs are reliable, consistent and ethical enough to meet or exceed regulatory requirements and commercial return on investment (ROI) thresholds. To mitigate unintended outcomes when humans or machines write code, team diversity is one part of an important system of quality mechanisms. At the same time, consider how quickly you can develop your AI-related concept. Make sure governance does not stifle the speed of innovation, and that high-risk applications are properly managed to enable an efficient go-to-market strategy.
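One lightweight way to keep governance proportionate to risk is a simple triage rule that fast-tracks low-risk use cases and reserves full review for high-risk ones. The sketch below illustrates the idea; the scoring factors, weights and thresholds are illustrative assumptions that each company would calibrate to its own risk appetite.

```python
# A minimal sketch of use-case risk triage. Factors, weights and
# thresholds are illustrative assumptions, not a standard.
def risk_tier(affects_people: bool, regulated_domain: bool,
              uses_genai: bool) -> str:
    score = 2 * affects_people + 2 * regulated_domain + 1 * uses_genai
    if score >= 4:
        return "high: full governance review before launch"
    if score >= 2:
        return "medium: standard checks plus periodic monitoring"
    return "low: lightweight sign-off, fast track to market"

# e.g., AI-assisted drug discovery vs. virtual eyeglass try-on
print(risk_tier(affects_people=True, regulated_domain=True, uses_genai=False))   # high
print(risk_tier(affects_people=False, regulated_domain=False, uses_genai=True))  # low
```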
In the financial industry, for example, despite longstanding experience with model risk management, many practices remain manual and ill-suited to the continuous learning capabilities of GenAI. EY teams helped one financial institution update its traditional AI governance policies to encompass GenAI and to integrate technology solutions, including an AI-managed system for model validation and testing. The refresh helped the client advance its AI model risk management activities and prepare to scale commercial GenAI applications.
4. Create a sustainable AI strategy
Achieving responsible AI starts with embedding a sustainable AI strategy in your culture, supported by education and monitoring tools, with humans at the center. Foster a culture in which stakeholders are continuously aware of, and educated on, the importance of responsible and ethical AI practices, so that management and teams stay aligned with evolving AI standards and your vision.