2. Strike the right balance between AI-enabled automation and people control
The key for CISOs is to identify the areas where AI-enabled automation is most suited to replace manual processes.
For instance, according to Adam Cartwright, CISO at Asahi Group Holdings who was also interviewed for the study, teams are still producing blueprints for systems to follow. To this point he says, “What we'd like is not having to write playbooks in the near future because the AI engine will have the context to understand what an analyst would do in this case and recommend those steps back to us, or even perform them.”
At ANZ, Ananthapavan has similar ambitions. “Currently, threat hunting is a manually-intensive process which involves coding and developing scripts, and then running them across our environment. We are looking to automate large parts of that process, to help identify malicious activity and respond faster,” he explains.
The study suggests AI’s impact on retaining cyber talent will also be profound, allowing employees to focus on more engaging and value-adding work, and to increase their throughput. CISOs report better employee retention thanks to eliminating menial work and lower spending on contracting.
CISOs are also eyeing a nascent shift from technical cyber practitioners to AI operators and “fine tuners”. Employees with prompt engineering skills, enabled by the right technology and an AI interface, can do the work of multiple penetration testers.
3. Tackle the cyber threat in AI expansion head on
Rapid adoption of AI can leave an organisation vulnerable to new cyber-attacks and compliance risks. Cyber teams need to take on a more strategic, proactive and integrated role within the enterprise to install appropriate controls as AI functions and experiments proliferate.
The study found adversaries are already targeting vulnerabilities in AI systems. Security researchers have used prompt injection – engineering prompts to deceive systems into bypassing filters or guardrails – to attack conversational bots from the likes of Bard and OpenAI.2 White hat researchers have demonstrated how data poisoning – feeding malicious data into algorithms to manipulate its output – can be launched on popular data sets at low cost with minimal technical skills.3 In another project, stickers were added to a stop sign to trick an autonomous vehicle into misreading it as a “45 miles per hour” sign.4 Researchers elsewhere crafted inaudible sounds capable of injecting malicious voice commands to AI-powered voice assistants.5
Cartwright at Asahi argues that AI tools for generating outputs like customer insights need to be properly managed in terms of consent and data re-use protocols. “You've got to make sure that the development environments, and particularly the data science development environments, have strong controls and are well-protected,” he says.