
How organisations can address cyber threats in the era of advanced AI

Businesses require innovative thinking and fresh strategies to manage the new cybersecurity challenges that come with the deployment of advanced AI systems.


In brief

  • Embracing GenAI to reinforce cybersecurity is crucial, but organisations must also recognise that the same technology is being added to hackers' arsenals.
  • Cyber adversaries are looking to exploit vulnerabilities within AI algorithms by manipulating input data, thereby deceiving the system’s decision-making processes.
  • Organisations can effectively counter the emerging threats posed by advanced AI by making full use of their existing cybersecurity investments.

The enormous power of generative AI (GenAI) and large language models (LLMs) is just beginning to be appreciated. Their capacity to automate and accelerate a vast range of business processes is only starting to be exploited.

As is the case with any new technology deployment, however, GenAI use brings with it new cyber vulnerabilities. Cybersecurity has emerged as a key area of concern for technology leaders in Ireland amid the surge in AI-enabled cyberattacks. According to the EY Ireland Tech Leaders Outlook Survey 2024, 61% of respondents identified elevated cyber risks and the management of data protection and flows as critical challenges, a notable increase from 53% in 2023.

Similar to the move to the cloud a decade or so ago, the technology will create new cyber exposures and increase the attack surface for cyber criminals. For example, consideration needs to be given to securing the LLMs that gather and analyse data from various departments within the organisation. Ensuring the secure collection and transmission of this data is paramount, as is the fortification and security of the model itself.

Consider a scenario where an organisation employs AI-driven facial recognition technology for secure access control. Here, it is critical to monitor the AI algorithms for vulnerabilities and to secure the data transmission channels. Safeguarding the facial recognition model itself from adversarial attacks is equally essential to prevent unauthorised access to sensitive areas.
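A minimal sketch of two such controls in Python, assuming a hypothetical recognise() function, illustrative file names and an assumed confidence threshold (this is not any specific product's API): verify that the deployed model artefact has not been tampered with, and reject low-confidence matches that can indicate spoofed or adversarially perturbed inputs.

import hashlib
from pathlib import Path

MODEL_PATH = Path("face_model.onnx")                 # hypothetical model artefact
KNOWN_GOOD_SHA256 = "replace-with-approved-digest"   # recorded at deployment time
CONFIDENCE_THRESHOLD = 0.90                          # assumed access policy

def model_is_intact() -> bool:
    # Compare the model file's hash against the digest approved at deployment.
    digest = hashlib.sha256(MODEL_PATH.read_bytes()).hexdigest()
    return digest == KNOWN_GOOD_SHA256

def grant_access(image_bytes: bytes, recognise) -> bool:
    # recognise() stands in for the organisation's face-matching function and
    # is assumed to return a (user_id, confidence) pair.
    if not model_is_intact():
        return False  # fail closed: a modified model is treated as an incident
    user_id, confidence = recognise(image_bytes)
    if user_id is None or confidence < CONFIDENCE_THRESHOLD:
        return False  # borderline matches are rejected and queued for review
    return True

Checks like these complement, rather than replace, the secure transmission of the image data itself.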

Monitoring emerging vulnerabilities closely

This is not a reason to shy away from the technology. It is simply a reminder that it must be treated in the same way as any new IT investment from a cybersecurity point of view. No organisation would dream of connecting an unsecured PC or laptop to its network and the same approach should apply to artificial intelligence (AI).

AI in cybersecurity is a double-edged sword. While it empowers organisations with enhanced security capabilities, it equips cybercriminals with similar tools. GenAI enables individuals lacking advanced coding skills to create malicious code efficiently. With just a few prompts, it can generate code to identify and exploit vulnerabilities within an organisation's network, a task achievable within minutes.

Just a few more steps are required to get the model to deploy the new cyber weapon.

One example of this is the phishing email. At present, organisations use a variety of means to detect these emails and prevent them from installing ransomware or other nefarious packages on their networks.

These methods usually begin with analysis of the language used in the email; if it does not appear natural, the message is screened out. The content is also analysed for accuracy and for its apparent knowledge of the organisation, while the sender is checked against lists of safe and unsafe sources. This approach detects the vast majority of phishing emails received by organisations.
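A minimal sketch of that layered screening in Python, with the allow and deny lists, supplier names, threshold and language_naturalness() scorer all placeholders for the tooling an organisation already operates:

from dataclasses import dataclass

DENY_LIST = {"badactor.example"}                         # assumed known-bad sender domains
ALLOW_LIST = {"supplier.example", "payroll.example"}     # assumed trusted sender domains
KNOWN_SUPPLIERS = {"Acme Logistics", "Northside Print"}  # illustrative names only

@dataclass
class Email:
    sender_domain: str
    body: str

def language_naturalness(text: str) -> float:
    # Placeholder: in practice a trained classifier or filtering service would
    # return a score between 0 and 1 for how natural the language reads.
    return 1.0

def should_quarantine(mail: Email) -> bool:
    if mail.sender_domain in DENY_LIST:
        return True
    if mail.sender_domain in ALLOW_LIST:
        return False
    # Unknown senders face the language and content checks described above.
    if language_naturalness(mail.body) < 0.6:            # assumed threshold
        return True
    # Crude content check: a message that names no known supplier shows little
    # real knowledge of the organisation and is held for review.
    if not any(name in mail.body for name in KNOWN_SUPPLIERS):
        return True
    return False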

In the new world, however, AI can be deployed to make these emails far more convincing. Detection has often relied on the perpetrator's imperfect command of the language: an email purports to come from a nearby supplier, but the text has clearly been written by someone who is not a native English speaker. GenAI removes that telltale sign, and the technology can leverage all the data cybercriminals possess about an organisation to craft highly convincing emails.

It might appear that advanced AI has tilted the balance of power in favour of the cyber criminals, but that is not necessarily the case.

Shift in approach required, not an increase in budget

The good news for organisations and for Chief Information Security Officers (CISOs) is that they do not necessarily have to make significant new cybersecurity investments to restore the balance. The first step is to focus on what you already have.


The threat landscape may have changed, but the response does not necessarily need to change with it. If an organisation has already invested in sophisticated threat and vulnerability detection systems, it needs to ensure that it is getting the most out of them.


It is not so much a question of new investment in cybersecurity as of a new approach. In the same way as the cloud changed the shape of organisations' networks and cyber defences had to be extended to cover the new, expanded perimeter, current defence systems will need modification to bring GenAI models within their orbit.

Stolen credentials present a grave peril to organisations. To bolster security beyond passwords and multi-factor authentication (MFA), organisations can deploy AI-driven solutions that monitor user behaviour for unusual login patterns or atypical actions. These systems scrutinise user interactions with critical infrastructure and can swiftly detect unauthorised access attempts or transactions. Adopting this strategy integrates AI into the defences themselves, strengthening existing measures and countering new threats with speed and efficacy.
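A minimal sketch of that kind of behavioural check in Python, with the per-user baseline, field names and thresholds all illustrative assumptions rather than any particular vendor's solution:

from datetime import datetime

# Assumed per-user baselines built from historical, legitimate activity.
BASELINES = {
    "j.murphy": {"usual_hours": range(7, 20), "known_countries": {"IE", "GB"}},
}

def is_anomalous_login(user: str, when: datetime, country: str) -> bool:
    baseline = BASELINES.get(user)
    if baseline is None:
        return True  # no history yet: treat as higher risk and step up checks
    outside_hours = when.hour not in baseline["usual_hours"]
    new_location = country not in baseline["known_countries"]
    return outside_hours or new_location

# Example: a 03:15 login from an unfamiliar country is flagged for step-up
# authentication or analyst review rather than being silently allowed.
flag = is_anomalous_login("j.murphy", datetime(2024, 5, 2, 3, 15), "RO")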

Procurement processes will also play an important role. Organisations must ensure that they are not buying trouble when they invest in GenAI. They need to interrogate vendors very closely to ensure that the systems they are acquiring are secure and do not bring increased vulnerabilities with them.

Of course, organisations will need to invest in upgrades to guard against the AI-driven increased sophistication of phishing and other cyberattacks, but this can be accommodated within normal cyber budgets.

Finally, it cannot be emphasised enough that GenAI will not offer a silver bullet to organisations seeking to bolster their cyber defences.

Summary 

While organisations exploit the potential of advanced AI, they need to be mindful of the new cyber vulnerabilities that come with it. Using existing cybersecurity measures to protect AI systems and applying rigorous due diligence to the purchase of such systems will help deal with the heightened threat, as will increased awareness of the new environment. While GenAI undoubtedly offers the ability to further automate certain elements of cyber defence and to enhance threat detection, it will not replace the existing cybersecurity systems in place or the human as the last line of defence.
