The enormous power of generative AI (GenAI) and large language models (LLMs) is just beginning to be appreciated. Their capacity to automate and accelerate a vast range of business processes is only starting to be exploited.
As is the case with any new technology deployment, however, GenAI brings with it new cyber vulnerabilities. Cybersecurity has emerged as a key area of concern for technology leaders in Ireland amid the surge of AI-enabled cyberattacks. According to the EY Ireland Tech Leaders Outlook Survey 2024, 61% of respondents identified elevated cyber risks and the management of data protection and data flows as critical challenges, a notable increase from 53% in 2023.
As with the move to the cloud a decade or so ago, the technology will create new cyber exposures and expand the attack surface available to cybercriminals. For example, consideration needs to be given to securing the LLMs that gather and analyse data from various departments within the organisation. Ensuring the secure collection and transmission of this data is paramount, as is hardening the model itself.
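As an illustration only, the Python sketch below shows two of the basic controls involved when departmental data is sent to an LLM service: masking obvious personal identifiers before the data leaves its source, and enforcing TLS certificate validation on the transmission channel. The endpoint name is hypothetical; in practice this would be the organisation's own LLM gateway.

```python
import re
import requests

# Hypothetical internal endpoint, shown for illustration only.
LLM_ENDPOINT = "https://llm-gateway.internal.example/v1/analyse"

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def redact_pii(text: str) -> str:
    """Mask obvious personal identifiers before the text leaves the department."""
    return EMAIL_RE.sub("[REDACTED_EMAIL]", text)

def submit_for_analysis(text: str) -> dict:
    # verify=True enforces TLS certificate validation so the data is
    # encrypted in transit; the timeout avoids hanging on a dead service.
    response = requests.post(
        LLM_ENDPOINT,
        json={"document": redact_pii(text)},
        verify=True,
        timeout=10,
    )
    response.raise_for_status()
    return response.json()
```

Redaction and transport security are, of course, only two controls among many; access management and monitoring of the model itself are equally important.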
Consider a scenario where an organisation employs AI-driven facial recognition for secure access control. Here, it is critical to monitor the AI algorithms for vulnerabilities and to secure the data transmission channels. Safeguarding the facial recognition model itself from adversarial attacks is equally essential to prevent unauthorised access to sensitive areas.
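A minimal sketch of the access decision in such a system might look as follows, assuming a model that produces face embeddings. The threshold value is illustrative; real deployments tune it against false-accept and false-reject rates, and the liveness check is one common defence against presentation attacks such as printed photos.

```python
import numpy as np

# Illustrative threshold only; real systems calibrate this per model.
MATCH_THRESHOLD = 0.6

def is_match(probe: np.ndarray, enrolled: np.ndarray) -> bool:
    """Compare a live face embedding against an enrolled one by cosine similarity."""
    cos_sim = float(
        np.dot(probe, enrolled)
        / (np.linalg.norm(probe) * np.linalg.norm(enrolled))
    )
    return cos_sim >= MATCH_THRESHOLD

def grant_access(probe: np.ndarray, enrolled: np.ndarray, liveness_passed: bool) -> bool:
    # Requiring a liveness check (e.g. blink or depth detection) before the
    # embedding comparison raises the bar for adversarial spoofing.
    return liveness_passed and is_match(probe, enrolled)
```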
Monitoring emerging vulnerabilities closely
This is not a reason to shy away from the technology. It is simply a reminder that it must be treated in the same way as any new IT investment from a cybersecurity point of view. No organisation would dream of connecting an unsecured PC or laptop to its network, and the same approach should apply to artificial intelligence (AI).
AI in cybersecurity is a double-edged sword: while it empowers organisations with enhanced security capabilities, it equips cybercriminals with similar tools. GenAI enables individuals who lack advanced coding skills to create malicious code efficiently. With just a few prompts, it can generate code to identify and exploit vulnerabilities within an organisation's network in a matter of minutes.
Just a few more steps are required to get the model to deploy the new cyber weapon.
One example is the phishing email. At present, organisations use a variety of means to detect these emails and prevent them from installing ransomware or other malicious payloads on their networks.
These methods usually begin with analysis of the language used in the email; if it does not appear natural, the message is screened out. The content is also analysed for accuracy and for knowledge of the organisation, and the source is checked against lists of known safe and unsafe senders. This approach detects the vast majority of phishing emails received by organisations.
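A deliberately simplified sketch of that layered screening logic is shown below. The sender addresses and suspicious phrases are invented examples; real email security gateways use far richer language models, reputation scoring and threat intelligence feeds.

```python
# Toy illustration of layered phishing checks, not a production filter.
SAFE_SENDERS = {"billing@trusted-supplier.example"}
BLOCKED_SENDERS = {"payments@suspicious-domain.example"}
SUSPICIOUS_PHRASES = ["urgent wire transfer", "verify your account", "kindly revert"]

def screen_email(sender: str, body: str) -> str:
    # 1. Source check: compare the sender against allow/deny lists.
    if sender in BLOCKED_SENDERS:
        return "quarantine"
    if sender in SAFE_SENDERS:
        return "deliver"
    # 2. Language check: flag phrasing commonly seen in phishing lures.
    hits = sum(phrase in body.lower() for phrase in SUSPICIOUS_PHRASES)
    return "quarantine" if hits >= 1 else "deliver"
```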
In the new world, however, AI can be deployed to make these emails far more convincing. At present, phishing attempts are often betrayed by the perpetrator's imperfect command of the language: an email purports to come from a nearby supplier, but the text has clearly been written by someone who is not a native English speaker. GenAI removes that tell, and it can leverage whatever data cybercriminals hold about an organisation to craft highly convincing messages.
It might appear that advanced AI has tilted the balance of power in favour of the cybercriminals, but that is not necessarily the case.
Shift in approach required, not an increase in budget
The good news for organisations and their Chief Information Security Officers (CISOs) is that they do not necessarily have to make significant new cybersecurity investments to restore the balance. The first step is to focus on what is already in place.