
How CISOs can nurture a cyber-informed workforce in the age of AI

CISOs need to strike a balance between achieving optimal security and unlocking the value that AI adoption can bring to the organisation.


In brief

  • Human error and poor adoption of best practices represent the single biggest cyber risk associated with AI adoption.
  • Organisations need to adopt a three-stranded strategy covering technology, governance and operations to address new risks.
  • All aspects of the data management lifecycle must be viewed through a cybersecurity lens.

Organisations are beginning to implement artificial intelligence (AI) solutions at scale, and the enterprise software they use is increasingly AI-powered. The aim is to increase efficiency, productivity and creativity, but the technology brings significant additional cyber risks.

Businesses are already encountering AI-related cyber problems, with reports of staff disclosing sensitive commercial information and intellectual property to AI models not uncommon. Unfortunately, this type of problem goes hand in hand with the technology: AI, with its natural language interface, puts advanced analytics tools in the hands of non-technical staff across all functions in an organisation.

At the same time, the amount of corporate data being given to chatbots by employees rose nearly fivefold in the 12 months to March 2024, according to a study of three million workers carried out by SC Magazine. This puts organisations at much higher risk of data leakage.

That risk is heightened by weak adherence to cybersecurity protocols. According to the EY 2023 Global Cybersecurity Leadership Insights Study, 64% of Chief Information Security Officers (CISOs) were not satisfied with their non-IT colleagues’ adoption of cybersecurity best practice. Indeed, this was cited as the third-biggest internal cybersecurity challenge while human error continued to be a major cyberattack vector.

Focus on technology, governance, operations imperative

While cybersecurity is of paramount importance, it must not be allowed to become a barrier to AI adoption. The cybersecurity function therefore needs to adopt new approaches that support the safe acceleration of adoption. It must also nurture a cyber-secure workforce, which requires visibility into how AI tools are being used across the business. This calls for a focus on three key areas: technology, governance and operations.

In terms of technology, solutions are becoming available that enable cyber teams to detect when certain AI services are being used, track data flows and automate compliance. Other tools monitor data already in an organisation’s network for documents being uploaded to, or prompts being entered into, generative AI tools such as ChatGPT.
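To make the idea concrete, detection of this kind can start from something as simple as matching web proxy log entries against a watchlist of AI-service domains. The sketch below is illustrative only: the domain list, log format and function name are assumptions, not a reference to any specific product.

```python
# Illustrative sketch only: flag outbound requests to known generative AI
# services in a simple web proxy log. Domains and log format are assumptions.
AI_SERVICE_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "gemini.google.com",
    "claude.ai",
}

def flag_ai_usage(log_lines):
    """Return (user, domain) pairs for requests that reached an AI service.

    Each log line is assumed to take the form: 'timestamp user domain path'.
    """
    flagged = []
    for line in log_lines:
        parts = line.split()
        if len(parts) < 3:
            continue  # skip malformed entries
        user, domain = parts[1], parts[2]
        if domain in AI_SERVICE_DOMAINS:
            flagged.append((user, domain))
    return flagged
```

In practice the output of such a filter would feed a dashboard or alerting pipeline rather than be consumed directly, but the principle of matching traffic against a curated service list is the same.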

On governance, policy should focus on threat modelling from the outset. This will allow organisations to identify and quantify risk and, therefore, inform the design of appropriate controls. Organisations should also define the procedures for ensuring data protection and privacy in the development of AI models. They must also be accountable for the outputs of the models in use.

Threat evaluation must be accompanied by continuous data verification, classification and tagging. Our research has found that some organisations have as little as 20% of their data tagged or classified. All businesses need to prioritise tagging and verification for their most critical and sensitive data to ensure that they have the right safeguards for issues such as identity, data flow and data access.
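As a hedged illustration of what a first pass at tagging might look like, the sketch below labels documents by simple pattern matches so that the most sensitive material can be classified and safeguarded first. The tag names and patterns are assumptions for illustration; real classification programmes rely on far richer tooling and human review.

```python
import re

# Illustrative sketch only: a first-pass tagger that labels documents by
# simple pattern matches. Patterns and tag names are assumptions, not a
# production classifier.
PATTERNS = {
    "pii": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # e.g. US SSN-style numbers
    "financial": re.compile(r"\b(?:invoice|iban|sort code)\b", re.IGNORECASE),
    "confidential": re.compile(r"\bconfidential\b", re.IGNORECASE),
}

def tag_document(text):
    """Return the set of sensitivity tags whose patterns match the document."""
    return {tag for tag, pattern in PATTERNS.items() if pattern.search(text)}
```

Even a crude tagger like this lets an organisation triage which repositories hold untagged sensitive data and therefore where verification effort should be spent first.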

Governance should be led from the top, and consideration should be given to appointing an AI expert with the authority to guide the governance model across the organisation.

Cyber guidance, new processes need of the hour

While some high-profile AI hacking attempts, such as deepfake CFO bank transfer requests, tend to dominate the headlines, employee error remains the single greatest vulnerability for most organisations. AI itself represents a new threat vector, however. The technology gives employees the opportunity to query and extract value from data to which they may not previously have had access. This requires controls that prevent unauthorised or unintentional disclosure of sensitive information. In addition, it should be made easy for all staff to obtain appropriate cyber guidance.

That guidance could be provided by sophisticated AI-powered chatbots that advise employees on questions about sensitive or restricted data. This would alleviate the burden on cyber teams while reducing the need for employees to navigate lengthy and complex policy documents.


Organisations cannot rely on training and technology alone if they are to be cyber-secure in the AI era. New processes will be required to facilitate AI adoption while preventing cyber risk from spreading as a result. Those processes should not become a hindrance to AI adoption, however; instead, they should be designed to promote the safe and responsible use of AI.


The overall approach to data utilisation needs to change as well. A cybersecurity lens must be applied to the entire data management lifecycle. This will require Chief Data Officers (CDOs) to focus on data governance, quality and privacy at the same time as delivering value from the data.

The skills profile of the cyber team will also need to change. A narrow, highly technical skillset will no longer be sufficient when AI is being used in different ways, for widely varying purposes, across different areas of the business. The optimal approach is to build a team that balances a combination of broad disciplines, with the understanding that each brings its own strengths and weaknesses.

Given the scale and evolving nature of the challenge faced, it may be necessary for organisations to establish Centres of Excellence (CoEs) to address it. CoEs can provide strategic oversight and coordination for the secure deployment of AI. They can streamline and simplify governance requirements, processes, and data flows to facilitate faster deployment.

CoEs can play an important role in upholding good governance and best practices through skill sharing and the development of consistent protocols. They enable centralised governance and monitoring and support a consistent approach to AI use, offering better understanding of AI use cases and data assets.

Key takeaways for CISOs

Employee education and training is critically important for the safe and responsible adoption of AI. AI can be used to improve the quality and relevance of training and advice. It can also make cybersecurity policies and procedures easier to understand and access, for example through AI-powered chatbots for querying cyber policies.

Threat modelling and evaluation need to be comprehensive and continuous. The threat landscape for organisations is widening to include third and fourth-party AI services in the supply chain. Organisations must, therefore, identify all AI solutions in use both to understand risk and compliance exposures as well as to guide the development of controls to manage widespread AI usage. They must also invest in rigorous data classification to ensure AI models do not access restricted data and valuable intellectual property. This modelling and classification effort needs to be continuous as AI solutions constantly seek to access new and underused data sets.

Summary 

Organisations need to take a multi-faceted approach if they are to address the heightened cyber risks brought about by widespread AI adoption. Training, governance and operational policies must evolve to address the new reality of AI, while technology solutions must be deployed to support cyber teams and employees in dealing with the constantly evolving threat landscape. AI is accelerating and change won’t wait.
