Some AI use cases demand explainability, such as a machine learning tool used to detect cancer from medical imaging data, or one used to automate loan decisions. In addition to generating an output — the likelihood of cancer or the creditworthiness of a client — we also need to understand which factors contribute to this output to enable informed, auditable decisions. If system designers have used only black-box techniques in the development of such systems, then they may not be able to explain to users, regulators, broader society, or even themselves the factors that have contributed to an AI output. Beyond the risk of reputational damage, early efforts by governments around the world to regulate AI — for example, via the EU Artificial Intelligence Act — may expose companies to economic and legal risks if considerations such as explainability are not addressed in the design, development and maintenance of AI systems.¹⁵
Although enabling explainable AI in a classical-computing environment is not easy, achieving explainable quantum-AI models could be even more difficult. Initial research has noted that the underlying physical properties of quantum computers (for example, superposition and entanglement) make it impossible to directly audit quantum computations or to replicate them with classical computers.¹⁶ This means that use cases requiring an explanation may not be appropriate applications for quantum computing — at least until supplementary quantum-explanation tools are developed, widely available and credible.
Similarly, there is reason to believe that some fairness metrics and techniques used in traditional machine learning may no longer be applicable after a quantum transition, due to the opaque nature of quantum states.¹⁷,¹⁸ Increased research attention and investment into the intersections of fairness, explainability and quantum computing are required to gain a comprehensive view of the limitations of quantum machine learning and to safeguard communities against algorithmic harms.
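To make concrete the kind of audit that opaque quantum states could frustrate, consider demographic parity difference — one common classical fairness metric, which simply compares the rates at which a model returns a positive outcome for different groups. Computing it requires per-group access to model outputs, which is straightforward for a classical model. The function and data below are purely illustrative, not drawn from the sources cited above:

```python
def demographic_parity_difference(predictions, groups):
    """Gap in positive-prediction rates between the best- and worst-treated groups.

    predictions: list of 0/1 model outputs (e.g. 1 = loan approved)
    groups: list of group labels aligned with predictions
    """
    rates = {}
    for label in set(groups):
        outcomes = [p for p, g in zip(predictions, groups) if g == label]
        rates[label] = sum(outcomes) / len(outcomes)
    values = sorted(rates.values())
    return values[-1] - values[0]

# Illustrative data: group 'a' is approved 3/4 of the time, group 'b' 1/4.
preds = [1, 1, 1, 0, 1, 0, 0, 0]
grps = ['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b']
print(demographic_parity_difference(preds, grps))  # 0.5
```

A perfectly parity-fair model scores 0; the 0.5 here flags a large disparity. The point of the sketch is that even this simple audit presumes we can freely query and tabulate model outputs per group — an assumption that the measurement and replication constraints of quantum computation may complicate.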
Boards, governments and consumers are more concerned about, and aware of, the risks associated with poorly designed and mismanaged technology than in previous decades.¹⁹ To meet the increasing societal demands for trustworthy technology, the future of quantum technologies should be informed by lessons learned from previous digital ethics failures (for example, from instances where individuals’ data was misappropriated or misused, where algorithms were employed to amplify misinformation and hateful content online, and where marginalised groups faced added discrimination through the use of unrepresentative data sets and unfair models).
Business leaders will need to liaise with their technology teams and engage in rigorous ethical risk assessments to ensure that quantum machine learning and broader quantum road maps are feasible and proportionate to business needs. Furthermore, ongoing research and collaborations with the fields of digital ethics and RRI are necessary to better understand the technical and non-technical ways in which quantum technologies can align with existing ethical norms and principles. Companies such as Google have already recognised the synergies between AI ethics and quantum-computing ethics: in 2019, Google announced that its AI-ethics principles would be applied to its quantum programme to mitigate misuse.²⁰ All leadership teams will need to boost their focus on ethics and trust, which will mean ensuring that approaches to data selection, modelling, delivery and monitoring are enhanced and subject to additional scrutiny.
Finally, the adage ‘just because we can does not mean we should’ rings true when constructing ethical guidance fit for the quantum era. Quantum technologies — once sufficiently mature — may enable previously unfeasible or intractable processes to be enacted with relative ease. However, in unlocking new computational frontiers, there is no guarantee that quantum technologies will only be used in the best interests of society and the planet. For example, there is a concern that actors may harness quantum computing’s augmented ability to simulate molecules to develop materials that support unethical practices, such as the creation of cheaper but more environmentally harmful chemicals. The same concern applies to applications of quantum computing in combinatorial optimisation, if individuals elect to optimise solely for cost savings to the detriment of social or environmental good.
Whilst this article has primarily focused on the possible downstream implications of quantum computing, other quantum technologies — in areas ranging from sensing to communication — must also be subjected to rigorous ethical scrutiny to ensure that projects are initiated and maintained in accordance with the aims of RRI. For example, consider quantum sensors, which leverage quantum mechanical properties to obtain more sensitive, holistic and accurate measurements. Such sensors could provide the richer data needed to optimise traffic or detect diseases. However, they may also give actors even greater means to engage in fine-grained, covert surveillance. If applied unethically, quantum-sensing technologies could be used to compromise individuals’ privacy and — in more extreme cases — suppress dissent and degrade autonomy.