The global responsible AI framework is a flexible set of guiding principles and practical actions, part of a broader suite of EY tools, techniques and enablers designed to support the responsible development and use of AI.
Multi-disciplinary EY teams of digital ethicists, IT risk practitioners, data scientists and subject-matter specialists applied the global responsible AI framework to evaluate the biopharma’s responsible AI principles, as well as how these had been rolled out and understood across the business.
We overlaid the global responsible AI framework on the template the client had already created, interviewing key stakeholders and reviewing relevant documentation.
“We invested time in understanding the client’s environment, and our experience in AI governance meant we were also able to ask the right questions at the right time,” says the EY UKI Client Technology & Innovation Officer, Catriona Campbell.
We assessed how effectively the business had mitigated the risks of AI throughout its lifecycle, from problem identification through to modelling, deployment and ongoing monitoring.
To determine if the client had developed and implemented AI in line with its responsible AI principles, we also evaluated a sample of key AI projects, including forecasting, adverse event tracking and early disease detection.
Our review found that the biopharma was not always managing project-specific AI risks in line with its responsible AI principles. “The EY audit highlighted a number of gaps in our approach, allowing us to set minimum requirements for business teams working with AI, which we’re already working toward,” says the biopharma company’s AI Governance Lead.