At present, organisations are using AI in cybersecurity primarily for detection, response and recovery. For instance, neural network detectors can enhance network intrusion detection, while natural language processing (NLP) and machine learning (ML) can automatically generate cyber threat intelligence (CTI) records. Applications range from detecting human error within the organisation to real-time cyber-threat assessment for smart energy grids and protecting implantable medical devices from potentially fatal attacks.
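As a rough illustration of what such ML-based intrusion detection can look like, the sketch below trains an unsupervised anomaly detector (an isolation forest) on flow-level traffic features. The feature set, values and thresholds are illustrative assumptions, not a reference to any particular product.

```python
# Minimal sketch: unsupervised anomaly detection over network-flow features.
# The features (bytes_in, bytes_out, duration, packet_count) and the numbers
# below are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Stand-in for historical "normal" traffic: one row of flow features per flow.
normal_flows = rng.normal(loc=[500, 300, 1.0, 20],
                          scale=[50, 30, 0.2, 5],
                          size=(1000, 4))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_flows)

# Score new flows: -1 flags an anomaly (a possible intrusion), 1 is normal.
new_flows = np.array([
    [510, 310, 1.1, 21],    # resembles routine traffic
    [9000, 10, 0.05, 500],  # exfiltration-like outlier
])
print(detector.predict(new_flows))  # e.g. [ 1 -1 ]
```

In practice, models like this would be retrained regularly on fresh traffic and combined with signature-based detection rather than used in isolation.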
AI in cybersecurity is a double-edged sword. While it empowers organisations with enhanced security capabilities, it equips cybercriminals with similar tools: even individuals lacking advanced coding skills can leverage GenAI to create malicious code efficiently. Using existing cybersecurity measures to protect AI systems and applying rigorous due diligence to the purchase of such systems will help deal with the heightened threat, as will increased awareness of the new environment¹.
AI will also have a profound impact on cyber talent retention. It will allow employees to focus on more engaging and value-adding work, and to increase their productivity.
AI has the potential to change the nature of cyber teams through a shift from technical cyber practitioners to AI operators and so-called “fine tuners.” Individuals with prompt engineering skills, enabled by the right technology and an AI interface, will be able to do the work that currently requires several penetration testers.
Cybersecurity in the AI adoption journey
Rapid adoption of AI can leave organisations vulnerable to new cyberattacks and compliance risks. Bad actors are already targeting vulnerabilities in AI systems, while employees using AI can breach compliance obligations or regulations, for example by inadvertently exposing sensitive data, intellectual property or restricted material to AI models. By managing these risks, the cyber function has the opportunity to become a key enabler of AI adoption.
The importance of security in AI adoption was highlighted in the latest EY Future Consumer Index survey of Irish consumers’ attitudes and expectations around e-commerce. 83% of consumers stated that they would not continue a membership, subscription or contract with an organisation that experienced a major cyber breach. Specific concerns include identity theft (60%), viruses (59%) and the selling of information to third parties (58%), emphasising the urgent need for strong digital defences to safeguard consumer trust.
The 2024 EY Global Cybersecurity Leadership Insights Study identified a number of specific opportunities for the cyber function to improve AI implementation across a range of areas, including the supply chain, smart grids and autonomous vehicles.
Supply chain leaders are already leveraging AI across dozens of use cases and looking to do the same with GenAI across planning, purchasing, manufacturing, logistics and sales. Cyber teams should prioritise more engagement with and continuous monitoring of the supply chain to ensure that this already vulnerable attack surface is protected during broader AI adoption.
Smart energy service networks (ESNs) are using AI and ML to optimise solutions for energy production and consumption, demand response, and grid self-diagnosis. However, the rapid rollout of these technologies has sometimes failed to address cybersecurity concerns.
Cyber leaders have an opportunity to leverage existing AI-powered systems to build real-time cyber threat assessments.
Autonomous vehicles (AVs) use AI systems that sense their environment to make decisions, creating delicate cyber-physical systems that, if not secured adequately, can have potentially fatal consequences. AV cyberattacks can take many forms, including attacks on AI-powered control systems, driving components and risk assessment elements. Cyber teams are responding to the threat in various ways, including by building onboard AI intrusion detection systems that carefully monitor the vehicle’s operation for anomalous behaviour.
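As a toy illustration of that kind of onboard monitoring, the sketch below flags telemetry readings that deviate sharply from a rolling baseline. The monitored signal, window size and threshold are hypothetical assumptions; a production system would fuse many signals and far richer models.

```python
# Minimal sketch of an onboard anomaly monitor: flag telemetry readings that
# deviate sharply from a rolling baseline. Signal names and thresholds are
# hypothetical.
from collections import deque
from statistics import mean, stdev

class RollingAnomalyMonitor:
    def __init__(self, window=50, threshold=4.0):
        self.window = deque(maxlen=window)
        self.threshold = threshold  # z-score above which a reading is anomalous

    def observe(self, value):
        """Return True if `value` is anomalous relative to recent history."""
        if len(self.window) >= 10:  # wait for a minimal baseline first
            mu, sigma = mean(self.window), stdev(self.window)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                # Anomalous readings are not added to the baseline, so a
                # sustained attack cannot gradually poison it.
                return True
        self.window.append(value)
        return False

# Example: monitoring a CAN-bus message rate (messages per 100 ms, hypothetical).
monitor = RollingAnomalyMonitor()
for rate in [98, 101, 99, 100, 102, 97, 100, 99, 101, 100, 98, 400]:
    if monitor.observe(rate):
        print(f"anomalous message rate: {rate}")
```

The design choice of excluding flagged readings from the baseline is deliberate: an attacker should not be able to desensitise the monitor by ramping up malicious activity slowly.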
Key actions for cyber leaders
Cyber teams also have a role to play in preventing unauthorised use of, and uncontrolled experimentation with, AI by employees. These well-intentioned but unsanctioned activities, known as “shadow AI”, can lead to data breaches and other cyber risks, and organisations need to develop capabilities to detect and prevent them.
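One lightweight starting point for such detection, sketched below, is to scan egress proxy logs for traffic to known GenAI endpoints. The log format and the domain watchlist are illustrative assumptions; mature programmes would draw on curated SaaS feeds, the proxy’s native controls and data-loss-prevention tooling.

```python
# Minimal sketch: scan egress proxy logs for traffic to known GenAI endpoints.
# The space-delimited log format and the watchlist entries are illustrative
# assumptions for this example.
WATCHLIST = {"api.openai.com", "generativelanguage.googleapis.com"}

def flag_shadow_ai(log_lines):
    """Yield (user, domain) pairs for requests to watchlisted AI services."""
    for line in log_lines:
        user, domain, *_ = line.split()
        if domain in WATCHLIST:
            yield user, domain

sample_logs = [
    "jdoe api.openai.com POST /v1/chat/completions",
    "asmith intranet.example.com GET /wiki",
]
for user, domain in flag_shadow_ai(sample_logs):
    print(f"possible shadow AI use: {user} -> {domain}")
```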
Some organisations are forming AI advisory groups to coordinate AI initiatives, tackle the shadow AI problem and improve visibility of AI experimentation. These groups can set rules around what data may be shared and restrict the sending of data outside the organisation by anyone seeking to use AI.
Strong cybersecurity across all aspects of AI adoption gives organisations the confidence to embrace AI and experiment with it securely, helping them to identify practical applications and clearly define the return on investment.
Here are some critical actions that cybersecurity leaders can take: