
How the challenge of regulating AI in healthcare is escalating

Artificial intelligence is forcing healthcare regulators to play catch-up and rewrite the regulatory rule book.


In brief

  • Artificial intelligence has the power to revolutionize healthcare, but regulating it is proving a complex and sensitive challenge.
  • Existing models of regulation are designed for ‘locked’ healthcare solutions, whereas AI is flexible and evolves over time.
  • Regulators must catch up if they are to provide the certainty needed to unlock AI’s full potential, while fully protecting the rights of the individual.

Artificial intelligence (AI) has the potential to transform business operations across every sector of the global economy. But nowhere are the benefits of AI more apparent than in the heavily regulated healthcare industry, where the technology is poised to save and transform lives on a remarkable scale.

Healthcare AI applications already approved by regulators, known as software as a medical device (SaMD), offer a glimpse of what’s in store. In a US study, for example, an AI algorithm trained to analyze mammograms achieved a 9.4% reduction in missed breast cancers (false negatives) compared with human radiologists, as well as a 5.7% reduction in false-positive diagnoses.1

Other areas of medicine in which the efficacy of SaMD is being explored include dermatology, radiology, surgery, disease diagnosis, pharmacy and even psychiatry, where chatbots are being developed to automatically diagnose conditions such as anxiety and depression.

AI is also likely to drive emerging fields of healthcare, such as personalized medicine, in which it helps to create bespoke treatments based on an individual patient’s DNA.

AI doesn’t just analyze and act on big data at speed and at levels of accuracy unachievable by humans. Machine learning can also be built into AI algorithms, enabling them to learn from their mistakes, evolve and improve their performance.

Protecting patients and nurturing innovation

The healthcare sector is already one of the most heavily regulated on the planet, from doctors requiring licenses through to equipment standards and rigorous clinical trials for new drugs. SaMD is no exception. While the benefits of healthcare AI are great, patients still need protection from defective diagnoses, unacceptable uses of personal data and bias built into algorithms.

The regulation of healthcare AI, however, is still in its infancy and regulators are playing catch-up. While both the EU and US have taken tentative steps in this area — signaling the need for regulation and issuing proposals — there are still no concrete laws in place. One of the primary reasons for this is the complexity involved in regulating such a dynamic technology.

Prof. Dr. Heinz-Uwe Dettling, Partner, Ernst & Young Law GmbH and EY GSA Life Sciences Law Lead, explains that healthcare AI and SaMD require a rewrite of the regulatory rule book. He says this is because existing regulatory frameworks do not allow medical devices to change without first undergoing a drawn-out re-authorization process, which threatens to stifle adoption and innovation. By its nature, however, machine learning is designed to learn from data and improve its performance over time.

“This is what’s known as the ‘locked versus adaptive’ AI challenge,” he says. “The efforts of the regulators are required, but the regulation at their disposal was never designed for a fast-evolving technology like AI.”

The US Food and Drug Administration (FDA) has responded to this challenge by exploring a new approach, a “predetermined change control plan”, in which the manufacturer predicts the route along which its AI may develop. As Kynya Jacobus, Senior Manager, Life Sciences Law, Ernst & Young Law GmbH, explains: “The basic idea is that as long as the AI continues to develop in the manner predicted by the manufacturer, it will remain compliant. Only if it deviates from that path will it need re-authorization.”
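
To make the idea concrete, here is a minimal sketch of how a pre-declared change envelope might be checked in software. The metrics, thresholds and field names are illustrative assumptions made for this article, not elements of the FDA’s plan.

  # Illustrative sketch only: the manufacturer pre-declares the envelope within which
  # its model is allowed to change. All metric names and thresholds are hypothetical,
  # not part of the FDA's actual guidance.

  from dataclasses import dataclass

  @dataclass
  class ChangeControlPlan:
      min_sensitivity: float        # performance may never fall below this
      min_specificity: float
      max_sensitivity_gain: float   # larger jumps fall outside the predicted path
      allowed_inputs: frozenset     # input types the updated model may use

  def update_is_within_plan(plan: ChangeControlPlan,
                            old_sensitivity: float, new_sensitivity: float,
                            new_specificity: float, inputs_used: set) -> bool:
      """Return True if a retrained model stays inside the pre-declared envelope,
      i.e. under this sketch it would not trigger a fresh authorization."""
      return (new_sensitivity >= plan.min_sensitivity
              and new_specificity >= plan.min_specificity
              and (new_sensitivity - old_sensitivity) <= plan.max_sensitivity_gain
              and inputs_used <= plan.allowed_inputs)

  plan = ChangeControlPlan(0.90, 0.85, 0.05, frozenset({"mammogram"}))
  print(update_is_within_plan(plan, 0.91, 0.93, 0.88, {"mammogram"}))               # True: within plan
  print(update_is_within_plan(plan, 0.91, 0.93, 0.88, {"mammogram", "ehr_notes"}))  # False: new input type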

Filling the regulatory void

The European Commission’s (EC) proposed Artificial Intelligence Act, published in April 2021, aims to fill the regulatory void, creating the first legal framework on AI — turning Europe into what the Commission describes as “a global hub for trustworthy artificial intelligence.”2

The EC says its proposals aim to “guarantee the safety and fundamental rights of people and businesses, while strengthening AI uptake, investment and innovation across the EU.”

The proposals identify and categorize four levels of AI risk: unacceptable risk, high risk, limited risk and minimal risk. Healthcare AI applications would generally fall into the high-risk category and would need to fulfill the following criteria to achieve regulatory approval:

  1. Adequate risk assessment and mitigation systems
  2. High quality of the datasets feeding the system to reduce risks and discriminatory outcomes
  3. Logging of activity to ensure traceability of results (see the sketch after this list)
  4. Detailed documentation providing all information necessary on the system and its purpose, for authorities to assess its compliance
  5. Clear and adequate information to the user
  6. Appropriate human oversight measures to reduce risk
  7. High level of robustness, security and accuracy
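
As an illustration of the traceability criterion above, the following sketch logs each prediction together with the model version, a hash of the input and a timestamp, so that any result can later be traced back. The record format and field names are assumptions made for this example, not requirements set out in the proposal.

  # Hypothetical illustration of activity logging for traceability (criterion 3).
  # The audit record format and field names are assumptions, not EC requirements.

  import hashlib, json, time

  def log_prediction(audit_log_path: str, model_version: str,
                     patient_input: dict, prediction: str) -> None:
      """Append one traceable record per prediction: when it was made, by which
      model version, on which (hashed) input, and what the output was."""
      record = {
          "timestamp": time.time(),
          "model_version": model_version,
          # Hash the input rather than storing raw patient data in the log.
          "input_sha256": hashlib.sha256(
              json.dumps(patient_input, sort_keys=True).encode()).hexdigest(),
          "prediction": prediction,
      }
      with open(audit_log_path, "a") as f:
          f.write(json.dumps(record) + "\n")

  log_prediction("audit.jsonl", "lesion-detect-1.3.0",
                 {"patient_id": "P-0042", "image_id": "MMG-0042"}, "recall for biopsy")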

Under the EC’s proposals, an independent notified body would be responsible for ensuring an AI product complies with general requirements, such as whether its intended purpose and accuracy are stated, and whether its training data is reliable, representative and used in sufficient quantity.

Dettling explains that the EC balances an ethics- and transparency-based approach with the very real need to encourage and facilitate innovation. For example, the proposed regulation would rely on anonymized, pseudonymized or encrypted patient data, so that AI applications can access validated data without breaching patient privacy.

The Commission’s far-reaching proposed law is likely to prove controversial, with some commentators predicting a legislative battle lasting at least until 2022. The official Commission timeline includes a typical two-year period for application following adoption and publication of the final regulation, meaning new EU requirements might not come into force until 2024.

Building a SaMD framework

The FDA’s efforts to regulate in this area have resulted in its Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan, which was published in January 2021.

The FDA’s action plan aims to create a framework that would “enable the FDA to provide a reasonable assurance of safety and effectiveness while embracing the iterative improvement power of artificial intelligence and machine learning-based software as a medical device.”

The action plan focuses on how best to integrate SaMD regulation into the FDA’s existing framework for medical devices. It identifies five areas of focus, each with actions the FDA intends to take:

  1. Further developing the proposed regulatory framework, including through issuance of draft guidance on a predetermined change control plan (for software’s learning over time) 
  2. Supporting the development of harmonized good machine learning practices (GMLP)
  3. Fostering a patient-centered approach, including device transparency to users
  4. Developing regulatory science methods to evaluate and address algorithmic bias and improve algorithm robustness
  5. Advancing real-world performance monitoring pilots

While not as far-reaching as the European Commission’s proposals, the FDA action plan shares fundamental themes, which go to the heart of AI regulation. Perhaps one of the most important areas of focus is addressing the risk of in-built AI bias.

Bias occurs when unrepresentative training data is given to an AI algorithm to help it learn. In a healthcare setting, this could mean training data that does not reflect diverse populations, skewing outcomes towards a particular group in society or omitting a particular demographic on the basis of race, ethnicity or socio-economic status.
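
What checking a training set for representativeness might look like in practice is sketched below. The demographic groups, reference population shares and the tolerance threshold are all invented for the purpose of illustration.

  # Hypothetical sketch: compare the demographic make-up of a training set against
  # reference population shares and flag under-represented groups.
  # Group names, shares and the tolerance are illustrative assumptions.

  from collections import Counter

  def underrepresented_groups(training_labels, reference_shares, tolerance=0.5):
      """Return groups whose share of the training data is less than
      `tolerance` times their share of the reference population."""
      counts = Counter(training_labels)
      total = sum(counts.values())
      flagged = {}
      for group, expected_share in reference_shares.items():
          observed_share = counts.get(group, 0) / total
          if observed_share < tolerance * expected_share:
              flagged[group] = (observed_share, expected_share)
      return flagged

  training_labels = ["group_a"] * 900 + ["group_b"] * 80 + ["group_c"] * 20
  reference_shares = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}
  print(underrepresented_groups(training_labels, reference_shares))
  # {'group_b': (0.08, 0.25), 'group_c': (0.02, 0.15)} -> both under-represented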

The dangers of in-built bias and black box AI

“Training data can be a major source of problems,” says Dr. Dirk Tassilo Wassen, Senior Manager, Tax Technology and Transformation, Ernst & Young GmbH WPG. “The AI doesn’t choose its training data. This is done by professionals who pre-select data and feed it into the algorithm. At the moment there are no official regulatory guidelines around the use of training data.”

“So, what we need is a far more transparent process, similar to clinical trials, in which manufacturers engage in full and frank disclosure showing the attributes of training data they have used and how their AI works.”

A prime example of in-built AI bias was revealed in a study published by Science in 2019, which showed that a healthcare prediction algorithm used by hospitals and insurance companies throughout the US to identify patients in need of “high-risk care management” was far less likely to nominate black patients.3

The study found that this algorithm used healthcare spending as a proxy for an individual's healthcare need. But according to Scientific American, the healthcare costs of unhealthier black patients were on a par with the costs of healthier white patients, which meant black patients received lower risk scores even when their needs were greater.4
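
The mechanism is easy to reproduce with toy numbers. In the sketch below (all figures invented, and not the algorithm from the study itself), two patients with identical clinical need have generated different historical costs; a risk score built on cost then ranks them differently.

  # Toy numbers only (not the study's data): how a cost-based proxy can understate
  # the needs of patients who historically received, and therefore cost, less care.

  patients = [
      # name,        active chronic conditions,  historical annual spend (USD)
      ("patient_x",  4,                          9_000),   # less access to care, lower spend
      ("patient_y",  4,                          15_000),  # same clinical need, higher spend
  ]

  def risk_score_by_cost(spend_usd: float) -> float:
      """The flawed proxy: predicted need is assumed to track past spending."""
      return spend_usd / 1_000

  for name, conditions, spend in patients:
      print(name, "conditions:", conditions, "risk score:", risk_score_by_cost(spend))

  # Both patients have 4 chronic conditions, but patient_x scores 9.0 against 15.0,
  # so a programme that enrols only the highest scorers would overlook patient_x.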

Another key theme highlighted by both the FDA and EC is the need for AI transparency. SaMD may be able to undertake incredibly complex calculations, often beyond the capability of humans, but regulators are likely to insist that manufacturers explain how these devices arrive at decisions, so that a suitable level of oversight can be maintained. This is known in the AI industry as the black box challenge.

“AI is not infallible,” says Dettling. “There is a famous example in which an AI was asked to distinguish between photographs of wolves and huskies. The AI made the right judgment the majority of times, but with some notable exceptions.

“When the data scientists looked more deeply at how the AI had reached its decisions, they found that it wasn’t analyzing the physical attributes of each animal — instead, it was basing its decision on whether there was snow in the background of the picture. It’s easy to understand how flawed this approach is and how AI can select the wrong criteria and develop the wrong rules when making a decision.”
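
One common way data scientists surface this kind of shortcut learning is permutation importance: shuffle one input feature at a time and measure how much the model’s accuracy drops. The snippet below is a generic sketch of that idea on an invented dataset; the features and model are stand-ins, not the actual wolf-and-husky study.

  # Generic sketch of permutation importance: if shuffling "snow_in_background"
  # hurts accuracy far more than shuffling the animal's physical features, the
  # model is relying on the background, not the animal. All data is invented.

  import numpy as np
  from sklearn.ensemble import RandomForestClassifier
  from sklearn.inspection import permutation_importance

  rng = np.random.default_rng(0)
  n = 1_000
  snow_in_background = rng.integers(0, 2, n)      # spurious cue
  ear_shape = rng.normal(size=n)                  # genuine feature, uninformative here
  flip = rng.random(n) < 0.05                     # 5% label noise
  label_is_wolf = np.where(flip, 1 - snow_in_background, snow_in_background)

  X = np.column_stack([snow_in_background, ear_shape])
  model = RandomForestClassifier(random_state=0).fit(X, label_is_wolf)

  result = permutation_importance(model, X, label_is_wolf, n_repeats=10, random_state=0)
  for name, importance in zip(["snow_in_background", "ear_shape"], result.importances_mean):
      print(f"{name}: {importance:.3f}")
  # A large accuracy drop for "snow_in_background" reveals the shortcut the model learned.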

Nascent efforts to regulate artificial intelligence aren’t isolated to the US and the EU. Since early 2016, many national, regional and international authorities have begun adopting strategies, action plans and policy papers on AI.

The UK government, for instance, has issued guidance on the “responsible design and implementation of AI systems.”

Both Jacobus and Dettling agree that international standardization of AI regulation will become increasingly likely should the EC’s proposed legislation be enacted. Their sense is that the EC’s far-reaching legislation will set the standard globally, with the FDA adopting similar key points into its existing medical device regulatory framework.

Taking a lead from existing healthcare regulation frameworks

The good news for regulators tackling this green-field technology globally is that easily transferable examples of best practice regulation already exist within the healthcare sector.

“All the signs suggest that, in the future, AI regulation will closely follow the existing framework for medicinal products,” says Dettling. “Take clinical trials, for example. Regulators insist that a sponsor provide a clinical trial protocol, which includes a plan for statistical assessment, a clear endpoint, the medicine’s intended purpose, and details of which patients are included in and excluded from the trial. There’s a real sense that AI regulation will also develop in this direction. The EC’s proposed Artificial Intelligence Act requires technical documentation that includes the intended purpose of the AI; the methods and steps performed in developing the AI system; and the data requirements, in the form of datasheets describing the training methodologies and techniques as well as the training data sets used, including information about the provenance of those data sets, their scope and main characteristics, how the data was obtained and selected, labelling procedures (e.g. for supervised learning) and data cleaning methodologies (e.g. outlier detection).”
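
As an illustration of what such documentation might capture for a single training data set, the sketch below defines a simple machine-readable record. The fields loosely mirror the items Dettling lists, but the structure itself, and every example value in it, is an assumption made for this article, not a format prescribed by the proposed Act.

  # Hypothetical "datasheet" record for a training data set, loosely mirroring the
  # documentation items in the proposed Act (provenance, scope, selection, labelling,
  # cleaning). The schema and all example values are invented for illustration.

  from dataclasses import dataclass, asdict
  import json

  @dataclass
  class TrainingDatasheet:
      intended_purpose: str
      provenance: str                  # where the data came from
      scope_and_characteristics: str   # population covered, time range, modalities
      selection_criteria: str          # how records were obtained and selected
      labelling_procedure: str         # e.g. annotation protocol for supervised learning
      cleaning_methodology: str        # e.g. outlier detection, exclusion rules
      num_records: int

  sheet = TrainingDatasheet(
      intended_purpose="Flag suspicious lesions on screening mammograms",
      provenance="Imaging archives of two hospital networks, 2015-2020 (hypothetical)",
      scope_and_characteristics="Adults aged 40-74, four scanner models, two countries",
      selection_criteria="All screening exams with a confirmed two-year follow-up",
      labelling_procedure="Two radiologists per exam; disagreements adjudicated by a third",
      cleaning_methodology="Corrupt files removed; exposure outliers excluded",
      num_records=90_000,
  )
  print(json.dumps(asdict(sheet), indent=2))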

When considering the opportunities and challenges posed by AI in the healthcare sector, it is important for regulators and legal practitioners alike to appreciate that AI is a completely new, complex and dynamic field, and for the time being there are likely to be significant regulatory gaps and uncertainties to navigate.

Jacobus explains that companies and regulators are currently adopting an ad hoc, case-by-case approach. “Medical device manufacturers are having multiple pre-marketing authorization conversations with regulators, which creates inconsistencies in the overall approval process industry-wide,” she says.

“These conversations cover what the product is going to do, how it’s going to do it, and how risk is going to be controlled and minimized. Adopting this approach for individual products may be a suitable short-term fix, but it clearly won’t be a sustainable model as the SaMD industry increases in scale.”

Pioneering regulation in this area will undoubtedly increase the compliance burden on legal teams, but it will also provide much-needed clarity, reduce the risk of litigation for compliant organizations and provide SaMD manufacturers with the confidence they need to innovate and leverage AI in the healthcare sector to its maximum extent.

EY member firms do not practice law where not permitted by local law or regulation.



Summary

The use of AI is already transforming healthcare, improving diagnostic accuracy and opening up a whole world of medical possibility. However, regulation of AI and machine learning needs to catch up so that it doesn’t hold back technological advances. Clear and flexible regulation will help put this groundbreaking technology, and the companies that create and utilize it, on a much firmer legal footing.
