As Space Tech takes off, one of the most important challenges is ensuring that the ever-more powerful AI applications it produces are transparent and trustworthy.
Artificial Intelligence (AI) is already in most corners of our everyday lives, yet many people either don’t understand what it is or what it does, or have never even heard of it.
EY’s latest Future Consumer Index (FCI) research, which surveyed more than 1,500 Australians and New Zealanders in December 2021, found that 50% of respondents had some understanding of what AI is and 15% had a good understanding. But 28% had heard of AI yet said they had no understanding of it, and 7% had never heard of it at all.
When respondents were asked about their use of various services and devices over the past 12 months, it became clear that most were being touched by AI, whether they knew it or not. Subscription streaming was used by 54%, navigation applications by 52% and food delivery platforms by 33%. Online grocery shopping was used by 29%, digital wallets by 25%, facial recognition on smartphones by 20% and wearable fitness trackers by 15%.
AI is embedded in all of the above and a multitude of other services and devices consumers use daily, mostly without even considering the algorithms running in the background.
As the use of AI and Machine Learning (ML) grows and more and more services running on these powerful algorithms are rolled out, it’s crucial that organisations recognise the confusion and distrust among their customers. They must work out how best to turn that sentiment around, especially for the 35% of respondents who really don’t know what AI is at all.
Trust must be a high priority as AI goes into orbit
Space Tech is pushing the power of AI. Earth Observation tools use high-resolution geospatial images of our planet collected by satellites and analyse the resulting data with AI and ML. These tools have the potential to do an enormous amount of good. They will let us remotely monitor bushfires and, equally importantly, fuel loads ahead of summer to aid vegetation management. Energy companies can check operating assets across the roughest terrain without having to fly a drone, much less send a human crew into danger. Urban infrastructure such as water pipes can be continuously checked for leaks, while stadiums, bridges and buildings can be monitored for cracks.
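To make the analysis step a little more concrete, the sketch below shows one of the simplest calculations run over satellite imagery: the Normalised Difference Vegetation Index (NDVI), used here as a crude vegetation-density flag of the kind that might feed into fuel-load monitoring. It is an illustrative toy only, not EY’s or any vendor’s actual pipeline; the band arrays, threshold and function names are assumptions, and real Earth Observation tools layer far richer ML models on top of simple indices like this.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalised Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red + 1e-9)  # small epsilon guards against division by zero

def flag_dense_vegetation(nir: np.ndarray, red: np.ndarray, threshold: float = 0.6) -> np.ndarray:
    """Boolean mask of pixels whose NDVI exceeds a (hypothetical) fuel-load threshold."""
    return ndvi(nir, red) > threshold

# Toy 3x3 near-infrared and red bands standing in for a satellite tile
nir_band = np.array([[0.80, 0.70, 0.20],
                     [0.90, 0.60, 0.30],
                     [0.80, 0.80, 0.10]])
red_band = np.array([[0.10, 0.20, 0.15],
                     [0.10, 0.30, 0.25],
                     [0.20, 0.10, 0.08]])

print(flag_dense_vegetation(nir_band, red_band))
```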
It’s easy to see the positives and how Space Tech will help create a better life on Earth. But legitimate privacy concerns are running on a parallel track, and they must not be ignored.
Our FCI research looked deeper into consumer perceptions of AI to better understand their most pressing concerns. Most relate to AI that’s already well embedded, as opposed to the emerging Space Tech tools. But because Space Tech will be regarded as “all-seeing” technology, it’s even more important for companies to treat it as an opportunity to get on the front foot in ensuring trust and transparency.
Asked about concerns around the use of AI, 61% or more of respondents said they were extremely or very concerned about each of the following areas: companies selling personal information, unsolicited phone calls, ID theft, data security, the threat of private information being made public and unauthorised monitoring of information. Most of the remaining respondents were only somewhat concerned, with those who were not very or not at all concerned well in the minority.
Conversely, when asked about their comfort with the use of AI, respondents were mostly either comfortable or neutral about applications such as enhancing community safety, detecting crime, improving the movement of citizens and even improving purchase experiences.
Why elevating trust in AI is already a competitive differentiator
Regulation of AI technologies is coming. While we have regulations that govern data privacy, there are currently few that deal with the issues of bias and fairness. There’s no policy framework to provide guidance on how an algorithm arrives at a certain outcome, or on how individuals might challenge that outcome if an AI’s output affects them negatively in some way.
For example, with facial recognition technology already being used as a policing tool in sports stadiums, could it be skewed by racial-profiling bias, or simply flag someone wrongly as having a criminal record because their facial features resemble those of someone who does?
It’s easy to see how such AI systems could cause harm, and how quickly public distrust in them could escalate.
In Europe, there are already draft regulations to address ‘high-risk forms of AI’, and in time AI will be regulated across jurisdictions. This is an opportune moment for companies to be first movers in elevating trust and transparency in AI, treating it as a competitive differentiator rather than a compliance activity.
EY’s Trusted AI Framework is helping our clients understand the slate of new and expanded risks that may undermine trust not only in these systems but also in products, brands and reputation. We conduct AI Risk Assessments so that our clients can better understand how their AI-driven tools will be received by the public, and where the danger areas might lie.
It’s about balance. As consumers, we are open to ways that AI can help us reach positive societal outcomes, but not at the cost of privacy, safety and agency. We know that regulation has lagged behind the technology’s development. Now we are addressing that and at the same time working to ensure that we build, not diminish, trust in the AI tools that are coming with Space Tech.