Health leaders can’t afford to ignore AI opportunities

Healthcare leaders must navigate the challenges of deploying artificial intelligence (AI) or risk both missed opportunities and avoidable harm.


In brief

  • A growing number of AI tools are being deployed in health, offering opportunities to improve patient care and outcomes.
  • Healthcare leaders cannot rely on evolving regulation to determine the suitability of AI in their own clinical care setting.
  • Three focus areas can help leaders take a human-centred approach to balancing the potential benefits of AI with risks, including bias and security concerns.

Successful adoption of AI in health is a large prize, offering enhanced access to safe and more cost-effective clinical care. But setting clear expectations and parameters around the appropriate use of AI is critical to avoid missed opportunities or the risk of reputational damage from adverse events. Are public and private health services ready for the opportunities of AI?

Balancing the risks and opportunities of human-centred AI

Around the world, about 50 million people live with epilepsy. Unfortunately, for up to 30% of these people, medications don’t effectively control the condition because of a particular brain abnormality. These abnormalities can be corrected through surgery, but the challenge is detecting them – MRI scans of affected brains can often look normal to the human eye. Now a group of researchers in the UK has developed an AI-enabled algorithm that is twice as effective as MRI review alone at finding these abnormalities.1  The tool is being rolled out to hospitals worldwide, potentially making life-changing surgery an option for more people living with epilepsy.

This is just one example of promising AI use cases – already almost 200 AI tools for medical imaging alone have received FDA approval.2  But while these new AI tools provide significant opportunities to improve safety, productivity and quality of care, not all will deliver better outcomes for health consumers, and regulatory approval alone does not mean a tool can or should be adopted into models of care.

For healthcare leaders considering how to adopt AI in clinical services, a safety-first approach with built-in safeguards can help balance opportunities with risks. Fortunately, healthcare teams have significant experience in understanding and mitigating the risks associated with introducing clinical innovations, and leaning in to this knowledge will be essential during implementation.

Taking a human-centred approach can help leaders evaluate the merits and limitations of both clinicians and AI tools and better understand the implications for patient safety. For example, AI’s ability to analyse enormous volumes of data could offer accuracy and efficiency improvements in repetitive processes, such as quantifying radiology data to measure the size and volume of lesions on scans – a field known as radiomics. Even the most diligent humans can get fatigued, and AI could reduce the risk of errors and free up professionals for more value-adding work. On the other hand, AI struggles when facing new or changing situations, such as detecting less common diseases from a range of differential diagnoses. Earlier hype about AI replacing clinicians has gone noticeably quiet, and discussion now centres on how clinicians and machines can work together in partnership.3,4  The focus then is on how AI can augment human capabilities to deliver more and higher quality care, and how clinical roles may evolve accordingly.
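
To make the radiomics example concrete, the sketch below shows the kind of repetitive quantification involved: measuring lesion volumes from a segmentation mask. It is a minimal sketch, assuming a binary 3D mask (such as an AI segmentation model might output) and voxel spacing drawn from the scan metadata; the function name, toy data and choice of NumPy/SciPy are illustrative, not a reference to any approved clinical tool.

    import numpy as np
    from scipy import ndimage

    def lesion_volumes(mask, voxel_spacing_mm):
        """Return the volume in mm^3 of each connected lesion in a binary 3D mask."""
        voxel_volume = float(np.prod(voxel_spacing_mm))  # mm^3 per voxel
        labelled, _ = ndimage.label(mask)                # group contiguous voxels into lesions
        counts = np.bincount(labelled.ravel())[1:]       # voxels per lesion (label 0 = background)
        return [float(c) * voxel_volume for c in counts]

    # Toy example: two small "lesions" in a 10x10x10 volume with 1 x 1 x 2 mm voxels
    mask = np.zeros((10, 10, 10), dtype=bool)
    mask[1:3, 1:3, 1:3] = True   # 8-voxel lesion
    mask[7:9, 7:9, 7] = True     # 4-voxel lesion
    print(lesion_volumes(mask, (1.0, 1.0, 2.0)))  # -> [16.0, 8.0]

Counting voxels is trivial for a machine and tedious for a human, which is precisely why this class of task is an attractive first candidate for augmentation.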

In brief: Regulation of ethical AI in health


As AI technologies have matured, debate on the merits and ethics of using AI in healthcare has evolved. Some governments and industry bodies have published ethical principles, strategies and guidelines, while publications have explored key challenges including data quality, potential for bias, privacy, explainability, equity, accountability and liability. The European Union has been the most proactive in tackling these, with robust standards and draft regulations designed to ensure human-centric AI.5,6

The approach of innovative technology companies to “move fast and break things” is in stark contrast to the principle that doctors should do no harm to patients.7  The risk-to-benefit equation for ethical AI in health is a delicate balancing act of clinician-patient relationships; safety risks stemming from biased technology; data quality and consent; and how accountability is shared between clinicians, executives and AI developers.

Many international health authorities have repurposed pre-existing principles from the regulation of software as a medical device (SaMD).8,9,10  Methods to regulate the use of AI in the production of medicines are also being explored. While not guarding against all risks, these provide a route to market for AI in health and a degree of assurance for health providers that these tools meet some minimum standards and have been approved for use in specific circumstances.

Australian governments have been slower off the mark to engage in public dialogue on the challenges and opportunities inherent in AI in health. New Zealand lags even further, still legislating for a comprehensive therapeutic products regime to regulate medical devices.11  But the lack of public policy is not hindering AI advances. New tools are being developed and adopted in clinical settings across Oceania, underlining the urgency for healthcare leaders to determine their own approach to assessing and deploying AI within their own settings.

Three focus areas guide successful AI deployment in health

As healthcare leaders consider the role of AI in achieving high quality and cost-effective service delivery over the next five to ten years, they must first understand their organisation’s risk appetite and acknowledge risk aversion among stakeholders. This can help them effectively articulate the need to innovate and reform service delivery while balancing risk and maintaining a safety-first culture.

Regulation and ethics can help guide AI deployment (see boxout), but it’s important to note that regulation of AI-enabled devices in healthcare is determined by a specific and narrow criterion: whether the AI tool is safe to place on the market. Leaders will still need to manage risks and challenges within their own environment. Three focus areas can form the basis of a risk assessment framework:

Determining technical readiness: This includes:

  • Assessing the suitability of the AI tool for patient cohorts (with particular attention to Indigenous groups) and investigating the data on which it was trained and tested: are the findings transparent and explainable?
  • Agreeing privacy measures and how sensitive patient data will be securely managed, paying particular attention to consent, patient confidentiality and whether any data is shared beyond the hospital or practice (a minimal pseudonymisation sketch follows this list)
  • Ascertaining how the AI would fit with existing clinical equipment and technology, such as how it functions with different types of CT or MRI scanners, and whether it would perform satisfactorily alongside other innovations such as virtual care
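
One common safeguard behind the privacy point above is pseudonymising direct identifiers before any data leaves the organisation. Below is a minimal sketch of keyed pseudonymisation; the patient identifier, field names and key value are invented for illustration, and a real deployment would also assess re-identification risk in the remaining fields.

    import hashlib
    import hmac

    # Hypothetical secret held by the organisation's data custodian;
    # illustrative value only, never stored or shared alongside the data.
    PSEUDONYM_KEY = b"replace-with-a-securely-stored-secret"

    def pseudonymise(patient_id: str) -> str:
        """Replace a direct identifier with a stable, keyed pseudonym.

        Using HMAC rather than a bare hash means records for the same patient
        still link together, while re-identification requires the secret key.
        """
        return hmac.new(PSEUDONYM_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

    record = {"patient_id": "NHI-1234567", "age_band": "60-69", "scan": "MRI brain"}
    shared = {**record, "patient_id": pseudonymise(record["patient_id"])}
    print(shared)  # direct identifier replaced before the record is shared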

Clinical governance: Reforms to models of clinical care and the adoption of new technologies can be made or broken by clinical governance, which establishes key parameters for overall quality and safety of care. Leaders should consider how existing clinical governance capabilities can be applied to the proposed technology and how it may impact those using it. This can determine whether re-skilling is needed to help the clinical workforce practise safely alongside AI and collaborate with technical staff. Clinical governance must also consider how consumers would be informed and consent to their personal data being used in new ways.

Accountability: Where does the buck stop when something goes badly wrong? A health leader regularly takes calculated risks, but only after they and their team have analysed the likelihood of various pitfalls, how they would be mitigated and the implications for the organisation’s reputation amongst stakeholders. When adopting high-profile new technologies such as AI into clinical care, there will be extensive, and very public, scrutiny if something goes awry. It’s important to establish risk tolerances upfront with the governing board or other overarching authority. They would rightly expect that a leader has evaluated the risks involved and associated responsibilities, then prepared a comprehensive framework to manage those risks in line with best practice globally.
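
To show how these three focus areas might be operationalised, the sketch below expresses them as a pre-deployment checklist in code; the FocusArea structure, the readiness_report helper and the example check items paraphrase the considerations above and are illustrative inventions, not a prescribed assessment tool.

    from dataclasses import dataclass

    @dataclass
    class FocusArea:
        name: str
        checks: dict[str, bool]  # question -> resolved before deployment?

        @property
        def cleared(self) -> bool:
            return all(self.checks.values())

    def readiness_report(areas: list[FocusArea]) -> str:
        lines = []
        for area in areas:
            lines.append(f"{area.name}: {'CLEARED' if area.cleared else 'OPEN ITEMS'}")
            for check, done in area.checks.items():
                lines.append(f"  [{'x' if done else ' '}] {check}")
        return "\n".join(lines)

    assessment = [
        FocusArea("Technical readiness", {
            "Training and test data suitable for local patient cohorts": True,
            "Privacy, consent and data-sharing measures agreed": True,
            "Fits existing scanners, systems and virtual care": False,
        }),
        FocusArea("Clinical governance", {
            "Re-skilling needs for safe practice alongside AI identified": True,
            "Consumer consent to new uses of personal data addressed": True,
        }),
        FocusArea("Accountability", {
            "Risk tolerances agreed upfront with the governing board": False,
        }),
    ]
    print(readiness_report(assessment))

In practice such a checklist would live inside the organisation’s clinical governance workflow rather than a script; the point is that every open question is explicit and auditable before an AI tool touches patient care.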

Seizing the AI prize

Driving change in healthcare is not for the faint-hearted. Leaders must balance opportunities with risks and competing priorities while juggling constrained budgets, complex trade-offs, vocal stakeholders, and ever-rising expectations from health consumers. As healthcare leaders pursue AI-enabled change, a focus on technical readiness, clinical governance and accountability can keep their eyes on the prize.

EY support is available to health leaders and their teams to help establish a framework for safe deployment of AI into clinical care which:

  • Contextualises ethical AI
  • Identifies strong procurement and clinical governance procedures
  • Upskills your workforce
  • Safeguards patient data
  • Recommends interoperability and audit measures
  • Reduces legal exposure
  • Is overseen by robust governance and clear lines of accountability.


Summary

AI offers many promising applications within clinical care, which could deliver life-changing health treatments and better patient outcomes. Healthcare leaders will need to carefully balance these opportunities with myriad challenges including bias, security, privacy and integration. A three-pronged assessment framework, covering technical readiness, clinical governance and accountability, can help guide successful AI adoption while mitigating the risk of adverse consequences.
