AI is hardly new, but its full story is still unfolding. A responsible AI framework can help mitigate AI risks while complying with rapidly emerging global AI regulations. The EU, China and Canada have led the charge, with the US following closely: in October 2023, the Biden administration issued an executive order (EO) outlining AI regulations.
The sweeping EO, which aims to promote the “safe, secure, and trustworthy development and use of artificial intelligence,” will impact organizations across all sectors of the economy, from the most mature AI implementers to first-time adopters.
By following these steps, businesses can evaluate AI risks and build controls across critical business functions.
- Define a responsible AI framework to validate the compliance of AI models from design through implementation, supported by a rigorous feedback mechanism.
- Establish an AI operating model: a multidisciplinary organizational structure that brings business, technology and compliance functions together to implement AI responsibly at scale.
- Employ specialized cybersecurity controls to meet the unique challenges presented by AI systems and mitigate risk to your organization.
- Prepare your data. AI requires vast amounts of structured and unstructured data, and a mature data management program with robust governance is necessary to deploy transformative AI solutions (see the data-quality sketch after this list).
- Activate your enterprise across functions, defining roles and responsibilities for each and establishing a process to educate, train and inspire users.
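To make the data-preparation step concrete, here is a minimal sketch of an automated data-quality gate that could run before any dataset is used for AI training or fine-tuning. The schema, column names, thresholds and checks are illustrative assumptions, not a prescribed method.

```python
import pandas as pd

# Illustrative governance thresholds (assumptions, not prescribed values)
MAX_MISSING_RATIO = 0.05  # reject columns with > 5% missing values
REQUIRED_COLUMNS = {"customer_id", "transaction_date", "amount"}  # hypothetical schema

def data_quality_gate(df: pd.DataFrame) -> list[str]:
    """Return a list of data-quality issues; an empty list means the
    dataset passes this (deliberately simple) gate."""
    issues = []

    # Schema check: required fields must be present before any AI use
    missing_cols = REQUIRED_COLUMNS - set(df.columns)
    if missing_cols:
        issues.append(f"missing required columns: {sorted(missing_cols)}")

    # Completeness check: flag columns with too many missing values
    for col in df.columns:
        ratio = df[col].isna().mean()
        if ratio > MAX_MISSING_RATIO:
            issues.append(f"column '{col}' is {ratio:.1%} missing")

    # Duplicate check: duplicated rows can silently bias model training
    if df.duplicated().any():
        issues.append(f"{int(df.duplicated().sum())} duplicate rows found")

    return issues

if __name__ == "__main__":
    sample = pd.DataFrame({
        "customer_id": [1, 2, 2],
        "transaction_date": ["2024-01-01", None, "2024-01-03"],
        "amount": [10.0, 20.0, 20.0],
    })
    for issue in data_quality_gate(sample):
        print("DATA QUALITY:", issue)
```

In practice, a gate like this would be one control inside a broader data governance program, with thresholds owned and reviewed by the roles defined in the AI operating model.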
Guiding principles for responsible AI
“Reliability can go two ways. One is your AI system is performing as expected. And the other is that it is able to respond safely to new situations,” says Kapoor. “Companies need to have a strong monitoring framework with the tools and infrastructure necessary to make the platform easy to update and monitor, and also to generate reports. Automating analytics and reporting is key. AI is continually learning; it is not one and done.”
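As one way to automate the monitoring and reporting Kapoor describes, the sketch below computes a Population Stability Index (PSI), a common drift metric, to compare live inputs against a training baseline and emit a simple report line. The thresholds, simulated data and rule-of-thumb bands are illustrative assumptions.

```python
import numpy as np

def population_stability_index(baseline: np.ndarray,
                               live: np.ndarray,
                               bins: int = 10) -> float:
    """Compute PSI between training-time and live distributions for one
    numeric feature. Common rule of thumb (an assumption, tune per use
    case): < 0.1 stable, 0.1-0.25 investigate, > 0.25 significant drift."""
    # Bin edges come from the baseline so both distributions are comparable
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf

    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)

    # Small floor avoids division by zero / log of zero in empty bins
    eps = 1e-6
    base_pct = np.clip(base_pct, eps, None)
    live_pct = np.clip(live_pct, eps, None)

    return float(np.sum((live_pct - base_pct) * np.log(live_pct / base_pct)))

# Example: simulate drifted live traffic and generate a simple report line
rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)  # feature values at training time
live = rng.normal(0.4, 1.2, 2_000)       # live traffic, shifted and wider

psi = population_stability_index(baseline, live)
status = "ALERT" if psi > 0.25 else "WARN" if psi > 0.1 else "OK"
print(f"feature drift PSI={psi:.3f} status={status}")
```

A scheduled job running checks like this per feature, with results pushed to dashboards and alerting, is one practical form of the "not one and done" monitoring the quote calls for.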
The National Institute of Standards and Technology (NIST) will develop guidelines and leading practices for developing and deploying safe, secure and trustworthy AI systems. In support of these efforts, NIST is establishing the U.S. Artificial Intelligence Safety Institute (USAISI)1, supported by the U.S. AI Safety Institute Consortium. The consortium brings together more than 200 organizations, including the EY organization, to develop science-based, empirically backed guidelines and standards for AI measurement and policy. This work lays the foundation for AI safety and prepares the US to address the capabilities of the next generation of AI models and systems, from frontier models to new applications and approaches, with appropriate risk management strategies.
Guided by the NIST risk management framework and professional experience, the EY organization developed these basic principles to help clients build confidence and trust in the evolving understanding, regulation, cross-sector coordination and risk mitigation that will define the future of AI.
- Accountability: Establish clearly delineated internal ownership of AI systems and their impacts across the AI development lifecycle. Open the access pipeline slowly as user success builds.
- Transparency: Communicate openly with users about the purpose, design and impact of AI systems, so that designers and users can evaluate and appropriately deploy AI outputs. Help them understand both the benefits and the risks.
- Fairness: AI systems should be designed with consideration for the needs of all relevant stakeholders, with the objective of promoting inclusiveness and positive impact. The broader impact of this technology should fully align with your organizational mission and ethics.
- Reliability: AI systems should meet stakeholder expectations and perform with precision and consistency, remaining secure from unauthorized access, corruption and attacks. If an AI application is behaving unexpectedly and raising questions, it’s best to pull back use immediately for internal evaluation (see the circuit-breaker sketch after this list).
- Privacy: Data privacy, including collection, storage and usage, is paramount as AI systems are rolled out across an organization. A gradual, carefully planned approach to AI access and usage can minimize data risk.
- Clarity: Anyone using AI on behalf of your organization should receive explicit communication about potential risks, formal policies and expectations, so they are equipped to assess, validate and, where necessary, challenge AI outputs.
- Sustainability: The design and deployment of AI systems should be compatible with the goals of sustaining physical safety, social wellbeing and planetary health.
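The reliability principle above implies an operational control: if an AI application misbehaves, its use should be pulled back automatically pending internal evaluation. Below is a minimal circuit-breaker sketch; the thresholds, and the generate and looks_anomalous stand-ins, are hypothetical placeholders for a real model call and output check.

```python
import time

class AICircuitBreaker:
    """Pause an AI feature automatically when flagged outputs exceed a
    threshold, pending human review. Thresholds are illustrative."""

    def __init__(self, max_failures: int = 5, window_seconds: int = 300):
        self.max_failures = max_failures
        self.window_seconds = window_seconds
        self.failures: list[float] = []  # timestamps of recent anomalies
        self.open = False                # open circuit = feature disabled

    def record(self, output_flagged: bool) -> None:
        now = time.time()
        # Keep only anomalies inside the sliding time window
        self.failures = [t for t in self.failures
                         if now - t < self.window_seconds]
        if output_flagged:
            self.failures.append(now)
        if len(self.failures) >= self.max_failures:
            self.open = True  # pull back use; requires a human reset

    def allow_request(self) -> bool:
        return not self.open

def generate(prompt: str) -> str:
    """Stand-in for a real model call (hypothetical)."""
    return f"echo: {prompt}"

def looks_anomalous(reply: str) -> bool:
    """Stand-in output check, e.g., a policy or toxicity classifier."""
    return "forbidden" in reply.lower()

breaker = AICircuitBreaker()

def answer(prompt: str) -> str:
    if not breaker.allow_request():
        return "AI feature paused pending internal evaluation."
    reply = generate(prompt)                # hypothetical model call
    breaker.record(looks_anomalous(reply))  # hypothetical output check
    return reply

if __name__ == "__main__":
    print(answer("hello"))
```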
The views reflected in this article are the views of the author and do not necessarily reflect the views of the global EY organization or its member firms.
About EY AI capabilities:
EY professionals guide businesses toward responsible AI use by listening and developing customized options. Learn more by visiting EY.ai