Given AI’s potential to fundamentally transform the business landscape, organizations need a comprehensive plan and a systematic approach for the ethical and compliant use of AI. Here are six ways organizations can take an integrity-first approach to using AI:
1. Assess the AI use strategy
Whether the organization has already implemented AI or plans to do so in the near term, it’s important to understand its current maturity in managing the use of AI. An AI maturity assessment can help to identify critical gaps. For example, when a global pharmaceutical company conducted an AI use compliance assessment, it learned that one of its largest gaps was the absence of a consistent AI governance framework.
2. Develop a formal AI use policy and the means to implement it
Governance is the anchor for secure, sustainable, responsible and transparent use of AI. While an AI governance framework can be useful, such frameworks are often voluntary or inconsistently applied. A more constructive approach is to develop a formal, enforceable AI use policy, accompanied by the appropriate means to implement and monitor it. The policy should give specific attention to defining ethical AI principles for the organization; establishing guidelines to respect people’s rights, safety and privacy; ensuring the fairness, accuracy and reliability of AI output; and protecting the security of underlying data and models.
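To make "means to implement and monitor" concrete, the sketch below shows one way policy requirements could be encoded as automated checks run against each proposed AI use case at intake. This is a minimal illustration, assuming a structured intake form; the names (AIUseCase, PolicyRule, screen) and the rules themselves are hypothetical, not a prescribed framework.

```python
"""A minimal sketch of an enforceable AI use policy.

Assumes a hypothetical intake process in which every proposed AI use
case is described in a structured form and screened against the policy
before approval. All names and rules here are illustrative.
"""
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class AIUseCase:
    name: str
    processes_personal_data: bool   # touches people's rights and privacy
    privacy_review_completed: bool
    fairness_tested: bool           # fairness of AI output
    accuracy_validated: bool        # accuracy and reliability of output
    security_assessed: bool         # security of underlying data and models


@dataclass
class PolicyRule:
    description: str
    complies: Callable[[AIUseCase], bool]


POLICY: List[PolicyRule] = [
    PolicyRule(
        "Use cases touching personal data need a completed privacy review",
        lambda u: not u.processes_personal_data or u.privacy_review_completed,
    ),
    PolicyRule(
        "AI output must be tested for fairness before deployment",
        lambda u: u.fairness_tested,
    ),
    PolicyRule(
        "AI output must be validated for accuracy and reliability",
        lambda u: u.accuracy_validated,
    ),
    PolicyRule(
        "Underlying data and models must pass a security assessment",
        lambda u: u.security_assessed,
    ),
]


def screen(use_case: AIUseCase) -> List[str]:
    """Return the description of every policy rule the use case violates."""
    return [rule.description for rule in POLICY if not rule.complies(use_case)]


if __name__ == "__main__":
    proposal = AIUseCase(
        name="Customer-support chatbot",
        processes_personal_data=True,
        privacy_review_completed=False,
        fairness_tested=True,
        accuracy_validated=True,
        security_assessed=True,
    )
    for violation in screen(proposal):
        print(f"BLOCKED: {violation}")  # names the rule that was violated
```

Even a screen this simple makes the policy testable rather than aspirational: a proposal either passes every rule or is flagged for review, with the violated rule named.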
3. Assemble a cross-functional team
For an AI use policy to be most effective, multiple stakeholders across the organization (IT, privacy and information security, compliance, legal, innovation, finance and internal audit functions) need to work together to assess AI use cases, associated risks and appropriate guardrails. Organizations should establish a governance committee to ensure that each aspect of AI risk is owned by the relevant team and that the implications for different use cases are understood. Beyond the governance committee, the cross-functional team can monitor the consistent application of the governance and risk management approach across the organization. Each team plays a different role in the AI lifecycle and in managing its use; only by working together can the relevant AI risks be managed effectively from end to end.
4. Build a regulatory and litigation response plan for AI
A regulatory and litigation response plan for AI is the next stage of governance planning. With legal and regulatory environments becoming more challenging, especially around AI, organizations should be prepared with a response plan for such crisis events. This is especially important in the event of an AI-washing claim against your organization, that is, an allegation that it has overstated or misrepresented its use of AI. Should an issue arise, the organization’s use of AI will be heavily scrutinized. Organizations need to know who must be involved, where the data lives and who is responsible for it. They will have to run a full response program to collect the relevant artifacts and demonstrate, from a technical perspective, how the organization is using AI. It is an expensive process that involves hiring lawyers, reviewing models and records, and presenting all of these records to the regulator. It is important to recognize that this is not a traditional subpoena request: where a traditional subpoena might require an organization to produce emails, AI litigation may require it to produce algorithms.
5. Optimize data governance and processes
In the EY Global Integrity Report 2024, executives cited inconsistent or incomplete data feeds into AI models as their number one challenge in deploying AI within the compliance function. For legal and compliance professionals, and arguably the workforce at large, to trust the data, organizations need a clear and complete understanding of it. This should include data mapping and lineage, so they know where the data comes from, as well as its level of quality and its limitations.
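As a sketch of what such data mapping might capture, the example below records lineage and documented quality for each feed into an AI model. The field names and the completeness threshold are illustrative assumptions, not a standard schema.

```python
"""A minimal sketch of lineage and quality records for data feeding an
AI model. Field names and thresholds are illustrative assumptions."""
from dataclasses import dataclass
from datetime import date
from typing import List


@dataclass
class DataFeed:
    name: str
    source_system: str          # where the data originates (lineage)
    upstream_feeds: List[str]   # parent datasets, for end-to-end mapping
    last_refreshed: date
    completeness: float         # share of required fields populated, 0..1
    known_limitations: str      # documented caveats for downstream users


def flag_unreliable(feeds: List[DataFeed],
                    min_completeness: float = 0.95) -> List[str]:
    """Return the names of feeds whose documented quality is below threshold."""
    return [f.name for f in feeds if f.completeness < min_completeness]


if __name__ == "__main__":
    feeds = [
        DataFeed("customer_master", "CRM", [], date(2024, 5, 1), 0.99,
                 "Excludes prospects"),
        DataFeed("transactions", "ERP", ["customer_master"], date(2024, 5, 2),
                 0.90, "Manual journal entries often lack cost-center codes"),
    ]
    print(flag_unreliable(feeds))  # ['transactions']
```

Recording limitations alongside lineage matters: a model fed by the transactions data above may function perfectly well, yet its output on cost-center questions should be treated with caution.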
6. Build an inventory of all AI tools in use
Organizations should have, or build, an inventory of all AI and machine learning (ML) tools in use. As the organization’s AI capabilities mature, it can focus on building a scalable, flexible, secure infrastructure that can safely manage a portfolio of AI algorithms.
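One minimal way to structure such an inventory is sketched below. The fields (business owner, risk tier, review date) are assumptions about what a governance team might track, not a standard schema.

```python
"""A minimal sketch of an AI/ML tool inventory. The record fields are
illustrative assumptions about what a governance team might track."""
from dataclasses import dataclass
from typing import List


@dataclass
class AIToolRecord:
    tool_name: str
    origin: str             # "vendor" or "internal"
    business_owner: str     # team accountable for the tool's use
    use_cases: List[str]
    risk_tier: str          # e.g., "low", "medium", "high"
    last_reviewed: str      # ISO date of the last governance review


inventory: List[AIToolRecord] = [
    AIToolRecord("contract-summarizer", "vendor", "Legal",
                 ["summarize supplier contracts"], "medium", "2024-04-15"),
    AIToolRecord("churn-model-v3", "internal", "Marketing analytics",
                 ["predict customer churn"], "high", "2024-03-02"),
]

# A simple governance query: which high-risk tools exist, and who owns them?
high_risk = [(t.tool_name, t.business_owner)
             for t in inventory if t.risk_tier == "high"]
print(high_risk)  # [('churn-model-v3', 'Marketing analytics')]
```

An inventory like this also gives the governance committee described in step 3 a single place to see which tools carry the most risk and which are overdue for review.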
The pace of AI advancement is only accelerating. Given everything organizations must consider, not only to implement AI but also to instill confidence in it, they must develop a cohesive integrity-first approach. Ad hoc efforts to chase risks and challenges after the fact will not suffice.
An integrity-first AI agenda rests on a robust AI use strategy built on a strong governance framework: clearly defined policies and procedures, controls that align to the governance protocols, sound data governance and processes, and a cross-functional team that can both drive the deployment of AI and champion a culture of integrity around it. Together, these elements harness the full potential of AI while mitigating its risks.