
How to reimagine a trusted framework for AI in the public sector


AI- and ADM-enabled digital services can deliver better outcomes for people and the public sector – but only if we can close the trust gap.


In brief

  • Governments are increasingly exploring AI and ADM tools to build a more intelligent, cost-effective public sector.
  • A lack of public trust in these technologies may hinder their potential to deliver better outcomes for Australians.
  • Three focus areas can help build a risk-based regulatory framework that guides the human-centred deployment of AI and ADM in the public sector.

The use of artificial intelligence (AI) and automated decision-making (ADM) is growing fast. Within the public sector, AI and ADM are becoming increasingly important tools, helping governments use data and analytics to tackle some of their most complex problems – for example, tracking the spread of infectious diseases like COVID-19, identifying trends in tax fraud and making better decisions about allocating resources. They can also enable more intelligent digital public sector services, supporting governments to meet growing demand for services with tighter budgets.

All of us have probably used some of these AI-enabled services. If you’ve asked a question of a chatbot on a government web page, used the ATO’s self-service portal to submit your tax return or applied for support after the recent floods, you’ve used AI. These services aim to make accessing support much easier and quicker for citizens, while lifting the burden off government call centres and staff.

But they are only the tip of the AI and ADM iceberg. AI offers governments an opportunity to completely rethink how they offer services to citizens. By using data and analytics, AI and ADM can connect information from across government, helping build a public sector that offers personalised, seamless services across life’s journey. For example, it might provide information on claiming family tax benefits to someone who’s just welcomed a new child, or support in applying for the age pension as a person nears retirement. The New South Wales government has restructured its website this way, presenting services through a human-focused “life events” lens that puts the citizen at the centre of the design, rather than the traditional departmentalised and siloed approach.

We expect this human-centred approach to delivering public sector services to become more widespread, as measures of government success broaden to include support for wellbeing. AI and ADM can be critical enablers of this, helping governments build services that deliver positive outcomes for people, as well as for policy and budgets.
 

Closing the AI and ADM trust gap

But despite this potential, the use of AI and ADM in government still lags that in the private sector. A recent EY survey commissioned by Microsoft found that AI is a digital priority for government – but only 4% of agencies surveyed had managed to scale its use to achieve real transformation in services.

Several barriers are likely behind the slow uptake. Cultural resistance may be one, along with the effort, skills and resources required to implement the technology sustainably and in line with best practice. Security concerns may also play a role, and the lack of skilled talent is certainly a challenge: around 70% of Australian public service agencies face a shortfall in data and digital skills.

But perhaps the greatest obstacle to greater use of AI and ADM in government is a trust deficit. Seventy percent of Australians aren’t comfortable with governments using their personal data, and 41% don’t want government departments to share their data among themselves.

Unfortunately, some public sector digitisation programs have focused on technology without considering its impact on humans, leaving people feeling disconnected and disenfranchised from the services they need. When digitally driven decisions are not transparent – or appear to prioritise cost savings over citizens – it’s not surprising that trust is in short supply.

The issue of trust is also apparent within government itself. Examples such as the Federal Government’s Online Compliance Intervention scheme – better known as Robodebt – highlight the consequences of not implementing AI and ADM in the right way. They have created a degree of wariness in some parts of the public sector about the greater use of these technologies, even when their potential to improve operations and services is acknowledged.


A trusted framework for AI and ADM in government

If AI and ADM are to fulfil their potential in government, we urgently need to consider how to build a trusted framework that allows for their adoption at scale, mitigates the risk of misuse and still encourages innovation in both the public and private sectors. We believe this framework should include three key elements:

1. Focus on human outcomes

Technology should always serve people, not the other way around. AI/ADM can power better, more cost-effective services, help achieve policy objectives and assist staff to make better, more informed decisions – but only if citizens and public sector teams trust these technologies to have a positive impact on people’s lives and society more broadly.

A fundamental rethink of how digital services are developed is required. Instead of treating people merely as the “end users” of AI/ADM-powered digital services, governments should put humans at the centre of their design and deployment, helping achieve people-focused outcomes and mitigate the risk of harm. This will also require working closely across government departments, and with the private sector, universities and non-profits, to understand what citizens really want from government services, and then considering how technology can deliver this in an inclusive and equitable way.

2. Clear risk-based regulation and governance

Much of the mistrust in AI/ADM stems from a lack of clarity around its permitted use. The fast growth of these technologies has left regulators scrambling to keep up. Many countries initially hoped for self-regulation, or chose to rely on existing legislation, regulations and case law to govern AI use or at least protect individuals subjected to its outcomes. But while elements of existing laws may touch on some of the risks arising where AI or ADM is deployed, they often will not cover AI/ADM-specific outcomes.

A consistent national regulatory framework can build trust in AI/ADM, and deliver the clarity required to accelerate investment and innovation, both within the public sector and industry. As outlined in the EY-Trilateral Research report, A survey of artificial intelligence risk assessment methodologies – the global state of play, while we currently see a diversity of approaches to AI governance around the world, more jurisdictions, including the EU, are moving to risk-based regulation.

In a risk-based approach, the burden of compliance is proportionate to the risk posed by the technology. This balances the need to mitigate potential misuse against the need to encourage the innovation that will unlock more benefits for the public sector and citizens. It should include clear guidance on how AI risks will be assessed, so that organisations can determine whether their intended application could be considered high risk, and invest time and resources accordingly.
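
To make the idea of proportionate compliance concrete, the sketch below (in Python) shows how a risk-based triage of a proposed AI/ADM use case might work. The tiers, criteria and field names are hypothetical illustrations of the approach, not drawn from any enacted Australian or overseas framework.

```python
# Illustrative sketch only: a hypothetical risk-based triage of an AI/ADM
# use case. The tiers and criteria are invented examples of the approach,
# not drawn from any enacted framework.
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    MINIMAL = "minimal"  # e.g. internal document search; light-touch duties
    LIMITED = "limited"  # e.g. informational chatbots; transparency duties
    HIGH = "high"        # e.g. decisions affecting entitlements; full assurance


@dataclass
class UseCase:
    name: str
    affects_entitlements: bool  # influences payments, benefits or penalties?
    fully_automated: bool       # no human review before the decision takes effect?
    uses_sensitive_data: bool   # health, biometric or similar data?


def classify(use_case: UseCase) -> RiskTier:
    """Map a proposed use case to a compliance tier, so an agency can
    self-assess early and budget compliance effort proportionately."""
    if use_case.affects_entitlements and use_case.fully_automated:
        return RiskTier.HIGH
    if use_case.affects_entitlements or use_case.uses_sensitive_data:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL


if __name__ == "__main__":
    proposal = UseCase(
        name="Automated debt-raising from averaged income data",
        affects_entitlements=True,
        fully_automated=True,
        uses_sensitive_data=False,
    )
    print(classify(proposal))  # RiskTier.HIGH -> strongest obligations apply
```

The point of the sketch is the proportionality: a use case landing in the high tier would trigger the full weight of assurance obligations, while low-risk tools face only light-touch duties.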

Governance is at least as important as legislation. The establishment of a designated regulatory agency – appropriately funded and staffed – is an important step in increasing market confidence in AI and ADM adoption. While some have argued for non-regulatory “centres of excellence” to guide the use of AI/ADM, a central body with the authority to introduce binding market regulation would significantly increase certainty in the market for AI/ADM adoption and innovation.

3. Assurance that instils confidence

Assurance is all about building confidence and trust. It goes hand in hand with correctly operationalising regulatory obligations and validating technology-driven outcomes. Just like auditing in other industries, a robust AI assurance framework that checks and verifies systems and processes and allows for decisions to be traced and explained (and challenged if necessary) can build trust in AI, guard against bias in models and provide the level of confidence needed to broaden its use.
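
As one illustration of what traceability and explainability could mean in practice, the sketch below shows the kind of record an assurance framework might require for every automated decision. The field names and the append-only JSON log are assumptions made for illustration, not a prescribed standard.

```python
# Illustrative sketch only: a hypothetical record of an automated decision,
# captured so it can later be traced, explained and challenged. Field names
# and the log format are assumptions, not a prescribed standard.
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Optional


@dataclass
class DecisionRecord:
    decision_id: str
    system_name: str               # which AI/ADM system produced the decision
    model_version: str             # exact version, so the outcome can be reproduced
    inputs: dict                   # the data the decision was based on
    outcome: str
    explanation: str               # plain-language reason a citizen could read
    human_reviewer: Optional[str]  # None if fully automated - itself auditable
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def log_decision(record: DecisionRecord, path: str = "decision_log.jsonl") -> None:
    """Append the record to an append-only log that an auditor can replay."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")


if __name__ == "__main__":
    log_decision(DecisionRecord(
        decision_id="2024-000123",
        system_name="benefit-eligibility-checker",
        model_version="1.4.2",
        inputs={"income_declared": 41200, "dependants": 2},
        outcome="eligible",
        explanation="Declared income is below the family threshold for 2 dependants.",
        human_reviewer=None,
    ))
```

A record like this is what makes the audit analogy work: an assessor can replay the log, check decisions against the stated rules and surface the explanation to anyone who wants to challenge an outcome.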

Several countries are considering how to develop AI assurance frameworks. AI assurance is a major priority for the UK government, as outlined in the UK’s National AI Strategy, which sets out an ambition to be “the most trusted and pro-innovation system for AI governance in the world”. The UK has developed an AI Assurance Roadmap, which includes recommendations around developing common standards and techniques, building a dedicated AI assurance profession and improving links between industry and researchers.

Australia is well placed to take a leadership position in shaping the future of AI assurance, which will be of increasing importance to economic competitiveness and geopolitical security. Just as we have taken a leading role on the international stage in the creation of standards in domains such as blockchain and cybersecurity, Australia can also influence the development of a mature AI assurance framework.
 

Three actions government can take now to address the AI trust deficit

The ability of AI and ADM to augment existing processes and systems with simple, smart digital support can completely reframe how public sector services are delivered – moving away from a departmentalised approach to one that is connected, personalised and people-centred, delivering greater benefits for government and people. If Australia is to reach its ambition of being a top-10 digital nation by 2030, we urgently need to accelerate this potential through the adoption of a consistent, nationwide, risk-based regulatory framework, underpinned by robust assurance and focused on positive human impacts. Not only will this build trust in AI and ADM systems, it will also help allay fears that technology will dehumanise the public sector – and build confidence in its ability to do just the opposite: create an effective digital government delivering better outcomes for citizens. We believe that taking three steps now can help Australia’s government achieve this:

  1. Set an example for the private sector: Understand what AI and ADM technologies are currently in use within government, and assess whether they are truly serving the needs of citizens.
  2. Establish a central regulatory agency with the authority to introduce binding market regulation for AI and ADM.
  3. Introduce national risk-based regulation for AI and ADM, underpinned by a clear assurance framework.


Summary

Most governments use some AI- or ADM-enabled tools, but there is huge untapped potential to expand their use and reimagine service delivery. While barriers to adoption vary, one of the biggest challenges may be the need to build trust with a public wary of sharing data with government. A regulatory framework built around three focus areas can help governments deploy trusted, human-centred AI that delivers better outcomes.

