This unease is giving rise to pointed questions. Will the algorithms powering autonomous vehicles keep passengers safe? Will automated decisions on loan applications be transparent and non-discriminatory? Will facial recognition cameras violate citizens’ privacy or mistakenly target innocent individuals?
Governments and policymakers have been responding to these ethical concerns. Since 2016, governmental bodies, multi-stakeholder groups, academic institutions and private companies have published more than 100 sets of ethical guidelines for the trustworthy development and adoption of AI. Momentum is now growing to put these principles into practice through regulation or other policy means. In February 2020, the European Commission released a White Paper on AI detailing its comprehensive approach to these issues.
However, developing pragmatic policy and governance approaches that reflect constraints and realities on the ground requires coordination between companies and policymakers.
EY undertook a global survey of policymakers and companies in late 2019 and early 2020 to establish how well these two key sets of stakeholders are aligned. The survey revealed significant gaps between the private sector and policymakers, gaps that create new market and legal risks. Greater coordination and collaboration are necessary as we move from principles to practice.
The survey covered 12 AI use cases, such as autonomous vehicles, facial-recognition check-ins and algorithmic recruiting, and 11 ethical principles, including “explainability”, “privacy and data rights” and “fairness and non-discrimination”.
The survey data show that policymakers have reached consensus on the ethical principles they intend to prioritise. When it comes to facial recognition technology, policymakers show a clear ethical vision, rating “fairness and avoiding bias” and “privacy and data rights” as the two most important principles by a wide margin.
Companies’ responses, by contrast, were spread fairly evenly across all the ethical principles, suggesting they have yet to differentiate among them.
A similar pattern emerged for home virtual voice assistants. Policymakers cited “privacy and data rights” as the top concern by a wide margin while, once again, companies’ responses were spread evenly across the different ethical principles.
A further cause for concern is that companies appear to be focused on issues already covered by existing regulations such as GDPR, notably privacy and cybersecurity, rather than on emerging principles that will become critical in the age of AI, such as explainability, fairness and non-discrimination.
This focus on currently regulated issues rather than on the ethical issues raised by AI may reflect incentives. Companies have a narrower set of stakeholders than policymakers, and their goal is to maximise revenue and financial value. Policymakers, on the other hand, have a longer time horizon and a more diverse set of stakeholders. Consequently, they tend to focus more on principles that are socially beneficial but less tangible, such as fairness, human autonomy and explainability.
The misalignment between companies and policymakers is also evident in their expectations about the future direction of governance. Both agree that a multi-stakeholder approach will be required but diverge on what form it should take. While 38% of companies expect the private sector to lead such a framework, only 6% of policymakers agree, with two-thirds of them saying an intergovernmental organisation is most likely to lead.
This misalignment exposes companies to a range of new market, reputational, compliance and legal risks.
Consumers are expressing strong concerns specific to different applications of AI. Firms whose products or services don’t address these concerns will have fundamentally misread market demand, and risk losing market share. Furthermore, AI products and services that harm a consumer could quickly erode the company’s brand and reputation, with social media amplifying the damage.
If companies are not actively involved in shaping emerging regulations and don’t understand the ethical principles policymakers are prioritising, they risk developing products and services that aren’t designed to comply with future regulatory requirements.
An inability to comply with regulations could also expose companies to litigation and financial penalties.
To mitigate these risks, it is in companies’ best interest to work with policymakers to develop realistic and effective policy measures and governance frameworks. This will require closer collaboration between the two, with policymakers taking a more consultative and deliberative approach and drawing on private-sector input, especially on technical and business complexities.
More fundamentally, the survey results suggest the need for a comprehensive approach to regulation with both policymaker-led and company-led components, since each party’s strengths cover only part of the domain knowledge required.
One barrier to this approach is that policymakers do not trust companies’ intentions. Almost six in ten company respondents (59%) agree that “self-regulation by industry is better than government regulation of AI”, while a similar share of policymakers (63%) disagree. Furthermore, while 59% of company respondents agree that “companies invest in ethical AI even if it reduces profits”, only 21% of policymakers agree (49% disagree).
Bridging this trust gap will not be easy. One step that would certainly help is for firms to acknowledge the fears that new applications of AI are generating, demonstrating to consumers and policymakers that they are not out of touch with the most important ethical issues.
GDPR was just the beginning. AI will raise a host of new ethical challenges, and companies must ask which ethical principles matter most in their sector or segment. They should also engage with policymakers: if you’re not at the table, you’re on the menu. Policymakers are ready to move ahead, but without industry input, blind spots could lead to unrealistic or onerous regulation.
If companies want to lead on AI innovation, they need to lead on AI ethics as well. Firms need corporate codes of conduct for AI with real teeth, codes aligned with the ethical principles prioritised by consumers and policymakers.