
Shaping the future of life sciences: AI's regulatory, risk and technology dimensions


EY’s whitepaper helps organizations navigate the complexities of AI to optimize performance and remain compliant.


In brief

  • A high-tech sector by nature, the life sciences industry stands to benefit from the revolutionary potential of artificial intelligence (AI) – especially generative AI.
  • Regulatory efforts such as the EU AI Act are under way to balance the promise of this new technology with potential new risks.
  • EY’s whitepaper shines the spotlight on regulatory, risk and technology aspects to guide life sciences stakeholders on their path to trusted AI.

Artificial Intelligence (AI) is considered the next big transformational power, with generative AI (GenAI) alone estimated to be potentially more impactful than the development of the personal computer (PC). The resulting opportunities for the life sciences sector are monumental, and the technology has the potential to revolutionize the entire life sciences value chain. As with most quantum leaps in technological advancement, ethical considerations, regulatory challenges and impact on society need to be critically evaluated. Against this background, strategic preparation is crucial to harness AI’s potential while actively managing risks.

EY has produced a whitepaper providing guidance for navigating the evolving landscape of AI in the highly regulated life sciences industry. With a focus on (possible) upcoming regulations, the revolutionary potential of the technology and the associated risk factors, our work is a timely and convenient reference for the various players in the modern life sciences sector.


Download the whitepaper: "Shaping the future of Life Sciences: AI's regulatory, risk and technology dimensions"



1. Regulatory

We discuss the EU Artificial Intelligence Act (AI Act) and its potential implications for Switzerland and explore the possible Swiss approach to incoming legislation. Given the EU’s significance as a major trading partner for Switzerland, it is essential for organizations to understand the implications of the EU AI Act and ensure compliance.

EY’s whitepaper helps stakeholders, including traditional biopharma players and tech companies entering the industry, to prepare for upcoming regulations. Investing now in this topic will enable organizations to ensure compliance while maximizing the benefits of AI for their business.

2. Risk

The EU AI Act defines risk levels and mechanisms for governing them: it prohibits AI applications posing unacceptable risk, permits high-risk applications subject to strict compliance requirements, imposes transparency obligations on limited-risk AI, and allows minimal-risk AI without additional restrictions.

For the life sciences industry, dealing as it does with safety-critical applications, compliance with the new regulations can be challenging and costly. Companies must integrate AI governance and risk assessments into their organizational structures, develop ethical commitments, ensure strategic vision, assess AI impacts consistently and manage third-party risks.

In this environment, life sciences stakeholders will be keen to connect AI risks to trustworthiness principles along the end-to-end AI lifecycle. Successful operationalization of AI risk management requires alignment with enterprise risk management programs across domains such as governance, culture, methodology, processes and technology.

3. Technology

Finding the right balance between transparency and complexity in AI modeling is crucial for organizations seeking to leverage the benefits of AI. Incorporating approaches that enhance interpretability and identify potential bias helps mitigate the challenges associated with black box models and fosters understanding of, and trust in, AI systems.

Developers will need to prioritize interpretable features and use post-hoc analysis techniques to evaluate black box model behavior. They should also document algorithms appropriately to increase transparency and build stakeholder confidence.
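To illustrate what such post-hoc analysis can look like in practice, the sketch below applies permutation feature importance, a model-agnostic technique, to probe which inputs an otherwise opaque model actually relies on. It is a minimal example on synthetic data using scikit-learn; the dataset, model choice and feature names are illustrative assumptions rather than content of the whitepaper.

```python
# Minimal sketch: post-hoc interpretability for a "black box" model via
# permutation feature importance (model-agnostic, scikit-learn).
# The data, model and feature names are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a tabular life sciences dataset.
X, y = make_classification(n_samples=1000, n_features=8, n_informative=4,
                           random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train the opaque model whose behavior we want to evaluate post hoc.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time on held-out data
# and measure how much the score drops. Large drops flag the features the
# model depends on, which supports documentation and bias review.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=20, random_state=0)

for idx in result.importances_mean.argsort()[::-1]:
    print(f"{feature_names[idx]:>10}: "
          f"{result.importances_mean[idx]:.3f} "
          f"+/- {result.importances_std[idx]:.3f}")
```

Techniques of this kind complement, rather than replace, the documentation and governance measures described above: they give reviewers quantitative evidence about model behavior that can be recorded alongside the algorithm documentation.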

Summary

As AI comes of age and regulation catches up, organizations must make informed decisions that strike a balance between transparency, complexity and predictive power. Getting it right empowers organizations to enhance the reliability of their AI systems and optimize their performance while avoiding far-reaching risks of non-compliance.

Acknowledgements

We would like to thank Sharon Kaufman, Michael Imhof, Michael Graf, Iuliia Metitieri, David Sütterlin, Marco Pizziol, Oliver Mohajeri, Aljoscha Gruler and Esther van Laarhoven-Smits for their significant contributions to preparing and shaping this publication.

Related articles

Driving Innovation in MedTech: The Power of Circularity and Sustainable Product Design

Sustainability and circularity are fueling innovation in the MedTech industry, transforming product design and creating new value propositions. Explore circularity pilots in MedTech and the need for regulations to facilitate product take-back initiatives. Discover circular business models that align with hospital budgets and are shaping the future of MedTech and healthcare.

How do you balance technological progress and cyber risk in MedTech?

To benefit and protect patients in today’s healthcare landscape, we need cutting-edge technologies coupled with robust security frameworks.

How can CIOs successfully strike a balance between innovation & cost?

We offer our clients advice on how to tackle both dimensions in order to be well positioned and thrive in the digital economy.