
Six steps to confidently manage data privacy in the age of AI

Organizations need to address increased privacy and regulatory concerns raised by AI.


In brief

  • With growing use of AI, there is also the potential for increased data privacy risk.
  • Clear and consistent regulations on data use in AI have yet to be developed.
  • Organizations should proactively take steps to maintain data privacy commitments and obligations as they use AI.

The speed with which artificial intelligence (AI), generative AI (GenAI) and large language models (LLMs) are being adopted is increasing the risks and unintended consequences relating to data privacy and data ethics.

New LLMs are processing vast swathes of data – often taken from many sources without permission. This is causing understandable concern over citizens’ privacy rights, as well as over the potential for AI to make biased decisions about loans, job applications, dating matches and even criminal cases.

Things are moving quickly, and many regulatory authorities are just starting to develop frameworks to maximize AI’s benefits to society while mitigating its risks. These frameworks need to be resilient, transparent and equitable. While the EU has taken the most comprehensive approach, with new and anticipated legislation on AI, efforts to understand and agree on how AI should be regulated have been largely uncoordinated. So it’s little surprise that leading industry figures are calling on governments to step up and play a greater role in regulating the use of AI.1 To provide a snapshot of the evolving regulatory landscape, the EY organization has analyzed the regulatory approaches of eight jurisdictions: Canada, China, the European Union (EU), Japan, Korea, Singapore, the United Kingdom (UK) and the United States (US).

Ideally, businesses will develop adaptive strategies tailored to the rapidly changing AI environment; however, this may be difficult as many businesses are at the early stages of AI maturity. This creates a challenging situation for businesses wanting to progress but also needing to maintain regulatory compliance and customer confidence in how the business is handling data. “There’s tension between being first versus part of the pack. Organizations should implement an agile controls framework that allows innovation but protects the organization and its customers as regulations evolve,” notes Gita Shivarattan, UK Head of Data Protection Law Services, Ernst & Young LLP. In this article, we look at six key steps data privacy officers can take to help organizations stay true to their priorities and obligations around data privacy and ethics as they deploy new technologies like AI.

There’s tension between being first versus part of the pack. Organizations should implement an agile controls framework that allows innovation but protects the organization and its customers.

Below are six steps data privacy officers can take to help their organizations become more agile in protecting data privacy priorities and obligations as the use of AI expands.

1. Get your privacy risk compliance story in order

Assess the maturity of your privacy risk controls and overall privacy compliance to create a strong foundation for AI governance. It’s crucial to articulate a compelling compliance story to your own people and to regulators. New privacy laws are being enacted across jurisdictions at pace, but most (if not all) draw heavily on the EU General Data Protection Regulation (GDPR). So, when looking at privacy risk and ethics, make sure to “leverage the lessons learned from GDPR,” advises Matt Whalley, Partner, Law, Ernst & Young LLP. “Ensure you document relevant elements of your decision-making in case you are asked to justify this in the future.” Ultimately, use your story to show how compliance builds customer confidence, avoids reputational damage and financial penalties, and benefits the top line by enabling AI innovation and data-driven decision-making while managing risk.

Leverage the lessons learned from GDPR. Ensure you document relevant elements of your decision-making in case you are asked to justify this in the future.

2. Set up risk controls, governance and accountability

Risk controls and governance frameworks can help organizations build confidence in their AI applications in the absence of clear regulation. Yet, according to a 2022 EY study, only 35% of organizations have an enterprise-wide governance strategy for AI.

A robust AI governance program should cover the data, model, process and output while striking a balance between innovation and responsibility. It should enable your product development teams to experiment without stepping into high-risk areas that could put regulators on notice and damage customer confidence. Also, “AI models should be transparent,” notes Shivarattan, “so that regulators and citizens can see where data comes from and how it’s processed, to assure privacy and avoid bias.” Most important, governance frameworks should ensure responsibility and accountability for AI systems and their outcomes by:

  • Establishing clear procedures for the acceptable use of AI.
  • Educating all stakeholders responsible for driving and managing the use of data.
  • Keeping auditable records of any decisions relating to AI, including privacy impact assessments and data protection impact assessments (see the sketch below).
  • Defining a procedure to manage bias if it is detected.

AI models should be transparent, so that regulators and citizens can see where data comes from and how it’s processed, to assure privacy and avoid bias.
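
To make the record-keeping point concrete, here is a minimal illustrative sketch in Python of what an auditable AI decision record might look like. The AIDecisionRecord class, its field names and the append_to_audit_log helper are hypothetical illustrations for this article, not a prescribed standard or tool.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json


@dataclass
class AIDecisionRecord:
    """One auditable entry for a decision relating to an AI system."""
    system_name: str      # which AI system or model the decision concerns
    decision: str         # what was decided, e.g., approving a new use case
    decided_by: str       # accountable owner, supporting clear accountability
    lawful_basis: str     # e.g., "legitimate interest" under GDPR-style laws
    dpia_reference: str   # pointer to the data protection impact assessment
    bias_checks: list = field(default_factory=list)  # bias checks performed
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def append_to_audit_log(record, path="ai_audit_log.jsonl"):
    """Append the record as one JSON line, building an append-only audit trail."""
    with open(path, "a", encoding="utf-8") as log:
        log.write(json.dumps(asdict(record)) + "\n")


# Example: record the approval of a hypothetical GenAI use case.
append_to_audit_log(AIDecisionRecord(
    system_name="customer-support-genai",
    decision="Approved a pilot limited to non-sensitive customer queries",
    decided_by="Chief Privacy Officer",
    lawful_basis="legitimate interest",
    dpia_reference="DPIA-2024-017",
    bias_checks=["demographic parity review", "output sampling review"],
))
```

Keeping each entry as one self-contained JSON line means the log can be produced to a regulator or auditor without reconstruction, which is the point of the auditability requirement above.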

3. Operationalize data ethics

There is a clear interlock between data ethics, data privacy and responsible AI. “Data ethics compels organizations to look beyond legal permissibility and commercial strategy in the ‘can we, should we’ decisions about data use,” according to Whalley.

One of the first steps organizations should take in operationalizing data ethics is to review existing policies and operating models, then identify the key principles and policies to follow. Technology can be used to embed these principles and policies into front-line decision-making, helping ensure they are considered together with regulatory obligations.

The principles may originate from within the organization itself, as an extension of pre-existing values or employee sentiment. For example, to define an acceptable use of AI, you could follow steps similar to those for assessing a reasonable use of personal data: determine whether there is a legitimate interest, whether the benefits of the outcome outweigh the individual’s right to autonomy, and whether you have given sufficient weight to the potential for individuals to suffer unexpected negative outcomes.

The principles may also arise from other sources, including third-party organizations or customer sentiment. For example, in the absence of clear regulatory direction, some organizations with advanced AI models are taking steps to identify the standards that could apply across industries.2

Data ethics compels organizations to look beyond legal permissibility and commercial strategy in the ‘can we, should we’ decisions about data use.

The best path forward may not always be clear, so educating stakeholders on the policies and principles is critical. A “trade-off framework” should also be developed to help work through conflicts (a minimal sketch of such a check follows below).
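
As a purely illustrative sketch, the three balancing questions described earlier in this step could be captured as a simple checklist. The question wording, the assess_use_case helper and the proceed/escalate outcomes are assumptions for illustration, not a defined methodology.

```python
# The three "can we, should we" balancing questions, captured as a checklist.
TRADE_OFF_QUESTIONS = [
    "Is there a legitimate interest in this use of data?",
    "Do the benefits of the outcome outweigh the individual's right to autonomy?",
    "Has sufficient weight been given to potential unexpected negative outcomes?",
]


def assess_use_case(answers):
    """Recommend proceeding only if every question is answered 'yes' (True)."""
    unresolved = [q for q in TRADE_OFF_QUESTIONS if not answers.get(q, False)]
    if not unresolved:
        return "Proceed, and record the assessment in the audit log."
    return "Escalate for review. Unresolved: " + "; ".join(unresolved)


# Example: a use case that clears the legitimate-interest test but nothing else.
print(assess_use_case({TRADE_OFF_QUESTIONS[0]: True}))
```

Even a checklist this simple forces the “should we” conversation to happen, and produces a record of it, before a use case goes ahead.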

Lastly, having line of sight into data use in connection with AI across the organization is critical for monitoring data ethics compliance. Since data privacy concerns extend to suppliers and other third parties, those parties should also be contractually required to disclose when AI is used in any solutions or services they provide.

4. Report data privacy and ethics risks at board level

Stakeholders will need to work together to help the board understand and mitigate the risks associated with AI and make strategic decisions within an overarching ethical framework. Responsibility is often divided between the Data Protection Officer (DPO) or Chief Privacy Officer (CPO) – who may also have responsibility for data ethics – and the Chief Data Officer (CDO). Some organizations may want to go further and appoint a Chief AI Officer (CAIO). Together, these senior leaders will need to help ensure the right checks and balances are in place around the ethical use of data in AI.

5. Expand horizon scanning to include customer sentiment

In April 2023, Italy became the first Western country to (temporarily) block an advanced GenAI chatbot amid concerns over the mass collection and storage of personal data.3 Japan’s privacy watchdog also spoke out, warning the chatbot’s operator not to collect sensitive data without people’s permission.4

Such actions can, at a stroke, destroy the value of investments in AI. Systematic, forward-looking analysis – or horizon scanning – is vital to reduce the uncertainty of regulatory change and help avoid unexpected developments. But it’s not just about regulations: companies also need to stay in touch with what customers are thinking about AI usage and data privacy. Stay ahead of regulators by talking to your customers regularly to understand acceptable limits and “no-go” areas.

6. Invest in compliance and training

In a relatively short time, interest in using AI has multiplied, putting pressure on employees across organizations to understand the implications of its use and its impact on data privacy. Many organizations may have to hire additional specialists, as well as train and upskill existing compliance teams, combining on-the-job training with theoretical study.

It’s especially important to train employees in AI-facing roles such as developers, reviewers and data scientists, helping them understand the limitations of AI, where AI is prone to error, appropriate ethics and how to complement AI with human intervention. In addition to operational guidance on implementing AI controls, you will need to foster a mindset that balances innovation with an appreciation of data privacy and ethics.

EY member firms do not practice law where not permitted by local law or regulation.


Summary

The acceleration in AI adoption is increasing the risk of non-compliance with data privacy regulations and the likelihood that your organization could inadvertently damage customer confidence. By taking steps to help assure data privacy and the responsible deployment of AI, organizations can move forward with innovation, win the confidence of customers and maintain compliance.
