How cyber leaders in Ireland can accelerate AI adoption

AI has the potential to enhance an organisation’s cyber defences while simultaneously increasing its attack surface and exposure to attack.


In brief

  • Cyber teams are leveraging AI to be more effective with the same resources and stay ahead of evolving threats.
  • The pace of AI adoption along with uncontrolled use and experimentation has the potential to expose organisations to data and cyber breaches.
  • Clear usage principles and guardrails along with appropriate security measures and frameworks will enable organisations to adopt AI with confidence.

Advances in artificial intelligence (AI) and generative AI (GenAI) and their pace of adoption across business functions present opportunities as well as challenges for cybersecurity leaders. On the one hand, AI has the potential to enhance existing cyber defence capabilities while on the other its increased and often uncontrolled use across the organisation expands the attack surface and heightens the risk of data breaches.

By availing of these opportunities, cyber leaders can not only improve their organisations’ cyber defences but also assist in the adoption of AI across the business in a safe and secure manner. However, the 2024 EY Global Cybersecurity Leadership Insights Study found that while cyber leaders are proactively embracing AI in the cyber function, they are, to a large extent, not yet helping other business functions embed cyber measures into their AI models.

If they can fill this gap, it will help them drive value across the organisation through safer, more widespread adoption of AI. It also offers cyber teams a chance to reposition cybersecurity to become a true enabler of technology transformation.

AI can help fortify cybersecurity measures

AI in cybersecurity is not new; its use can be traced back to the 1980s. EY analysis highlights a sharp rise in AI-related cyber research, patents and investment over the past decade: AI now features in 59% of all cyber patents and has been the top technology explored in cyber research since 2017.

Organisations with the most advanced cyber capabilities are integrating AI into their detection, response and recovery processes in new ways, allowing them to stay ahead of attackers, who themselves are using AI attack methods unencumbered by regulations or use policies. AI helps cyber teams be more effective with the same or fewer resources, presenting an opportunity to satisfy the CFO by doing more with less. EY analysis points to efficiency gains from the use of AI in cyber defence that can range from 15% to 40%.

At present, organisations are using AI in cybersecurity primarily for detection, response and recovery. For instance, novel neural detectors can enhance network intrusion detection, while natural language processing (NLP) and ML can automatically generate cyber threat intelligence (CTI) records. Applications include detecting human error within the organisation, real-time cyber-threat assessment for smart energy grids, and protecting implantable medical devices from potentially fatal malicious attacks.
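To make the detection use case concrete, anomaly-based network intrusion detection can be sketched with an Isolation Forest: train on features of normal traffic, then flag flows that deviate sharply. The feature set, values and thresholds below are illustrative assumptions, not a production design.

```python
# Minimal sketch: flag anomalous network flows with an Isolation Forest.
# Features and thresholds are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" flows: [bytes_sent, packets, duration_s]
normal_flows = rng.normal(loc=[500, 10, 1.0], scale=[50, 2, 0.2], size=(1000, 3))

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_flows)

# A flow moving far more data than usual, e.g. possible exfiltration
suspicious = np.array([[50000, 400, 30.0]])
print(model.predict(suspicious))  # -1 flags an anomaly, 1 means normal
```

In practice such a detector would be trained on real flow telemetry and combined with signature-based tools rather than used alone.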

AI in cybersecurity is a double-edged sword. While it empowers organisations with enhanced security capabilities, it equips cybercriminals with similar tools, enabling individuals who lack advanced coding skills to leverage GenAI and create malicious code efficiently. Using existing cybersecurity measures to protect AI systems, applying rigorous due diligence to the purchase of such systems and raising awareness of the new environment will all help deal with the heightened threat¹.

AI will also have a profound impact on cyber talent retention. It will allow employees to focus on more engaging and value-adding work, and to increase their productivity.

AI has the potential to change the nature of cyber teams through a shift from technical cyber practitioners to AI operators and so-called “fine tuners.” Individuals with prompt engineering skills, enabled by the right technology and an AI interface, will be able to do the work that currently requires several penetration testers.

Cybersecurity in the AI adoption journey

Rapid adoption of AI can leave organisations vulnerable to new cyberattacks and compliance risks. Bad actors are already targeting vulnerabilities in AI systems while employees can breach compliance or regulations while using AI, such as by inadvertently exposing sensitive data, intellectual property or restricted material to AI models. This offers the cyber function the opportunity to become a key enabler of AI adoption.

The importance of security in AI adoption was highlighted in the latest EY Future Consumer Index survey of Irish consumers’ attitudes and expectations around e-commerce. 83% of consumers stated that they would not continue a membership, subscription or contract with an organisation that experienced a major cyber breach. Specific concerns include identity theft (60%), viruses (59%) and the selling of information to third parties (58%), emphasising the urgent need for strong digital defences to safeguard consumer trust.

The 2024 EY Global Cybersecurity Leadership Insights Study identified a number of specific opportunities for the cyber function to improve AI implementation across a range of areas including supply chain, smart grids, and autonomous vehicles.

Supply chain leaders are already leveraging AI across dozens of use cases and looking to do the same with GenAI across planning, purchasing, manufacturing, logistics and sales. Cyber teams should prioritise more engagement with and continuous monitoring of the supply chain to ensure that this already vulnerable attack surface is protected during broader AI adoption.

Smart energy service networks (ESNs) are using AI and ML to optimise solutions for energy production and consumption, demand response, and grid self-diagnosis. However, the rapid rollout of these technologies has sometimes failed to address cybersecurity concerns.

Cyber leaders have an opportunity to leverage existing AI-powered systems to build real-time cyber threat assessments.

Autonomous vehicles (AV) use AI systems that sense their environment to make decisions, creating a delicate cyber-physical system that can, if not secured adequately, lead to potentially fatal consequences. AV cyberattacks can take on many forms, including attacks on AI-powered control systems, driving components and risk assessment elements. Cyber teams are responding to the threat in various ways including by building AI onboard intrusion detection systems that carefully monitor the vehicle’s operation for anomalous behaviour.
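One simple form such onboard monitoring can take is statistical anomaly detection over a sensor stream: flag a reading whose z-score against a rolling window of recent values exceeds a threshold. The window size, threshold and wheel-speed scenario below are illustrative assumptions.

```python
# Hedged sketch of an onboard anomaly monitor: flag a sensor reading whose
# z-score against a rolling window of recent values exceeds a threshold.
from collections import deque
import statistics

class AnomalyMonitor:
    def __init__(self, window=50, threshold=4.0):
        self.window = deque(maxlen=window)  # recent readings
        self.threshold = threshold          # z-score cut-off

    def observe(self, value: float) -> bool:
        """Return True if `value` looks anomalous versus recent history."""
        anomalous = False
        if len(self.window) >= 10:  # wait for enough history
            mean = statistics.fmean(self.window)
            stdev = statistics.pstdev(self.window) or 1e-9
            anomalous = abs(value - mean) / stdev > self.threshold
        self.window.append(value)
        return anomalous

monitor = AnomalyMonitor()
# Normal wheel-speed readings, then a spoofed value
readings = [30.0 + 0.1 * (i % 5) for i in range(40)] + [120.0]
flags = [monitor.observe(r) for r in readings]
print(flags[-1])  # True: the spoofed reading is flagged
```

A real system would monitor many signals jointly (e.g. CAN bus message rates and cross-sensor consistency) rather than a single stream.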

Key actions for cyber leaders

Cyber teams also have a role to play in preventing unauthorised use of, and uncontrolled experimentation with, AI by employees. These often well-intentioned activities, or “shadow AI”, can lead to data breaches and other cyber risks, and organisations need to develop capabilities to detect and prevent them.
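One lightweight guardrail against shadow AI is scanning an outbound prompt for sensitive patterns before it leaves the organisation for an external model. The patterns and blocking policy below are illustrative assumptions, not a full data loss prevention system.

```python
# Hedged sketch of a shadow-AI guardrail: scan an outbound prompt for
# sensitive patterns before it is sent to an external AI service.
# Patterns and policy are illustrative assumptions.
import re

SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def check_outbound_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive patterns found; empty list means allow."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

print(check_outbound_prompt("Summarise this contract for jane.doe@example.com"))
# ['email']
print(check_outbound_prompt("Explain zero-trust architecture"))
# []
```

Such a check would typically sit in a proxy or browser extension in front of approved AI tools, with findings logged for the advisory group rather than silently dropped.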

Here are some critical actions that cybersecurity leaders can take:

  • Form AI advisory groups to coordinate AI initiatives, tackle the shadow AI problem and improve visibility of AI experimentation. These groups can set rules around shareable data and place restrictions on sending data outside the organisation for anyone seeking to utilise AI.
  • Implement strong cybersecurity across all aspects of AI implementation, giving organisations the confidence to embrace AI and experiment securely with it, helping them to identify practical applications and clearly define the return on investment.


Summary

AI opens opportunities for cyber leaders to improve the efficiency of their own departments and to become enablers of its adoption across the wider organisation. At the same time, AI adoption expands the attack surface and can expose organisations to new cyber risks. By implementing strong cybersecurity at every stage of the adoption journey, the cyber function can become a key agent of technology transformation.
