
The Board Imperative series

How we can ensure that AI benefits everyone


Want to create a better working world through AI? Then don’t forget the people it’ll displace.


In brief

  • The ethics of AI’s consequences – intended and unintended – has yet to be fully addressed.
  • Focusing on the communities being displaced by AI advances will help ensure that no one is left behind.

As a board member in a company that’s investing in AI – and what business isn’t these days? – you’ve probably found that the conversation has focused on some key questions. Such as, does your organization’s use of AI really mirror the corporate values you espouse? Are the outcomes for employees, customers and other people affected by it aligned with those values? And do you have a common set of ethical principles to guide usage of AI across the business?

In this short film, John Thompson, Chair of the Board at Microsoft, observes that the number one issue he and his board counterparts have to deal with is the ethical and responsible use of AI, not just AI as a technology.

In most cases, they’re doing this through two actions. First, applying robust principles and governance to ensure that the impacts on the people directly affected by AI are fair and trustworthy. And second, continuously reviewing AI initiatives for unintended outcomes and potential transformational business improvements. The targeted outcome? To balance the financial ROI from AI with enhancing the business’s societal license to operate.

For the board, the agenda behind these actions is clear. Get AI right – with appropriate safeguards and governance in place – and you’ll open up a new path to sustained value. Get it wrong, and the downside risks could be existential.

So, with those safeguards in place, that’s AI ethics sorted, right? Well, to a point. While AI’s most immediate risks are being addressed, there’s still an elephant sitting quietly in the corner of the boardroom whenever AI is being discussed.

It’s the ethics of the consequences, whether intended or unintended, for those individuals and communities – often far removed from the sources of AI innovation – whose livelihoods will be displaced.

One can hear the counterargument already: that AI will create new and better jobs – jobs of higher value than the generally lower-value positions displaced. All of that may well be true. But who’ll benefit from these exciting new employment opportunities? Probably not the same people who’ve been displaced.

This matters. The EY vision – one that we know is shared by many EY clients, especially in the tech industry – is to build a better working world. And not just for machines, but for people. For this aspiration to become reality, people still need to be working. Yet alongside the billions of dollars being invested globally in AI, hardly anything is going into working out how we’ll support and create productive and fulfilling lives for the many people whose roles it’ll replace.

For the first time in history, the people doing the displacing are so far removed from those being displaced as to be virtually unaware of the effects on them. And given the accelerating pace of change, those impacts are only going to increase.

Take coal mining. No rational person would oppose clean energy. But what will happen to those communities when AI solutions for lower-carbon generation have replaced them? Put a human face onto the indirect impacts of breakthroughs in energy tech. Or take the trucking industry and the entire infrastructure around it. As autonomous electric vehicles take to the roads, what happens to all the people affected – from gas station workers to roadside trucker restaurants and more? Or take retail. As checking out at a store comes to involve simply walking out with the goods, what happens to all the checkout clerks with families to feed?

So, what might be the way forward? The growing focus on ethical AI is a good start, but we need to widen the aperture of the ethical lens. Government might play a role – perhaps through a series of programs like the moon shot of the 1960s, creating a national goal and igniting a passion to solve some of our biggest challenges through new AI platforms. But government can’t do this alone, especially since it always faces the risk of unintended consequences: regulation can have a stifling effect if it merely tries to protect the status quo or substitute money for meaning.

Instead, here’s another approach. What if the tech industry came together and recognized that our most precious resource – both as an economy and a society – is underutilized human capacity? And not just in terms of intellect and ingenuity, but also creativity and people skills. A public/private partnership focused on using the tremendous benefits of technology to lift up the displaced would not only be a positive way to meet a societal imperative, but also a potential economic energizer offering a huge global return on investment.

If such an effort could get underway, it would create a level of awareness that would resonate across all industries using AI. But, as ever, the hard part is getting started. So, if you’re on the board of a company that’s committed – as most now are – to having a positive impact on society, how do you begin to make a difference?

From a holistic perspective, when a leadership team launches its company’s AI program – whatever the use case – it should make sure it answers three questions in its work plan from day one:

  • What’s the role and responsibility of the company we’re representing toward various groups of stakeholders?
  • Who’s going to be directly impacted by the company’s AI initiatives, within and beyond the organization – and who’s going to be impacted negatively, including indirectly in wider society?
  • How can we collaborate with others in the ecosystem to help and support the communities of people who’ll be negatively impacted?

Answering these questions puts in place three touchstones that encourage a board to think of the potentially displaced people as stakeholders – and enable it to apply the same level of transparency to the impacts on them as it does with other stakeholder groups.

This brings us back to the core purpose of building a better working world – which is about using tech not just to solve an immediate problem, but to generate longer-term, societal value by lifting all boats. Which, in turn, means focusing not just on those who’ll benefit from AI, but also on those for whom it’ll have the opposite effect. And when it comes to ensuring we use technology to create purposeful lives for all, we’re all in it together – because it’s in all our interest.


Summary

The message? Just because people may be displaced by AI, that doesn’t mean they have to be left behind. The reality is that AI can benefit everyone in society. Let’s work together to make sure it does just that.