
How can cybersecurity transform to accelerate value from AI?

A new global report shows how Australian CISOs can harness AI to deliver more value and position cybersecurity as a trusted business partner.


In brief

  • By integrating AI into cybersecurity detection, response and recovery processes, CISOs can effectively stay ahead of adversaries with fewer people.
  • Cyber teams can proactively work with the enterprise to prevent the development of shadow AI as experimentation proliferates across the business.
  • Helping the business to confidently deploy AI can shift cybersecurity from a team that slows things down to one that enables confident technology adoption.

The latest advancements of artificial intelligence (AI) are a double-edged sword. They offer great potential to ease cybersecurity workloads and address local skills shortages by expanding the scope of task automation, shortening response times and optimising visibility across the attack surface. But the increasing use of Generative AI (GenAI) across business functions is opening new vulnerabilities that many cyber functions are not currently positioned to address.

The 2023 EY Global Cybersecurity Leadership Insights Study showed that one of the key traits of organisations with the most effective cyber functions, known as “Secure Creators,” is their speed in adopting emerging technology in cyber defence, including AI and automation. This speed has helped them achieve detection and response times for cyber incidents that are more than 50% faster than those of other organisations.

For the 2024 Global Cybersecurity Leadership Insights Study, EY teams conducted additional research to learn how Secure Creators are responding to the recent surge in AI and GenAI use — both in the cyber function and throughout the enterprise.

The research found that CISOs are proactively embracing AI in the cyber function but to a large extent are not yet helping other business functions embed cyber measures into their AI models. If Oceania CISOs can fill this gap, it will help them drive value across their organisations through the safer, more widespread adoption of AI. This is an opportunity to reposition cybersecurity from “the department of No” to become a true enabler of technology transformation.

Transforming cybersecurity as an AI accelerator

Discover actionable insights for embedding AI into your cybersecurity frameworks

1. Use AI and automation to address cybersecurity skills shortages

CISOs can leverage AI to stay ahead of threats while needing fewer people and resources to do so. By rapidly analysing enterprise-scale data, AI can automatically detect different attack signatures and new attack methods. With the proper architecture, AI can plug into existing cyber approaches across IT and OT systems to detect incidents faster than people alone.

Gajan Ananthapavan, Global Head of Security Operations, Intelligence and Influence at ANZ Bank, who was interviewed for the study, says around 30% of the organisation’s incident response has been automated, thanks largely to machine learning (ML) and AI. “We ingest more than 10 billion data events each day as part of monitoring, detecting and responding to potential security events and incidents across our environment,” he said. “We wouldn't be able to manage that volume without ML and AI.”

A key trait of a more effective and adaptive cybersecurity approach is integrating AI into detection, response and recovery processes in new ways. Advances in deep learning and neural networks now enable larger and more heterogeneous datasets to be parsed in real time. The ability to self-train and learn is accelerating the impact of automation, helping cyber teams continuously monitor networks and applications, detect and forecast threats in near real-time, and respond to incidents faster. Deep learning also improves detection accuracy and efficiency. A meta-analysis of 69 research studies shows an average accuracy of over 92% in detecting spam, malware and network intrusions.1

According to the study, such strategies are allowing cyber organisations to stay ahead of adversaries, who themselves are using AI attack methods unhindered by regulations or use policies.


2. Strike the right balance between AI-enabled automation and people control

The key for CISOs is to identify the areas where AI-enabled automation is most suited to replace manual processes.

For instance, according to Adam Cartwright, CISO at Asahi Group Holdings, who was also interviewed for the study, teams are still producing blueprints for systems to follow. To this point he says, “What we'd like is not having to write playbooks in the near future because the AI engine will have the context to understand what an analyst would do in this case and recommend those steps back to us, or even perform them.”

At ANZ, Ananthapavan has similar ambitions. “Currently, threat hunting is a manually-intensive process which involves coding and developing scripts, and then running them across our environment. We are looking to automate large parts of that process, to help identify malicious activity and respond faster,” he explains.

The study suggests AI’s impact on retaining cyber talent will also be profound, allowing employees to focus on more engaging, value-adding work and to increase their throughput. CISOs report better employee retention as menial work is eliminated, along with lower spending on contractors.

CISOs are also eyeing a nascent shift from technical cyber practitioners to AI operators and “fine tuners”. Employees with prompt engineering skills, enabled by the right technology and an AI interface, can do the work of multiple penetration testers.

3. Tackle the cyber threat in AI expansion head on

Rapid adoption of AI can leave an organisation vulnerable to new cyber-attacks and compliance risks. Cyber teams need to take on a more strategic, proactive and integrated role within the enterprise to install appropriate controls as AI functions and experiments proliferate.

The study found adversaries are already targeting vulnerabilities in AI systems. Security researchers have used prompt injection – engineering prompts to deceive systems into bypassing filters or guardrails – to attack conversational bots such as Google’s Bard and OpenAI’s ChatGPT.2 White hat researchers have demonstrated how data poisoning – feeding malicious data into algorithms to manipulate their output – can be launched against popular datasets at low cost with minimal technical skill.3 In another project, stickers were added to a stop sign to trick an autonomous vehicle into misreading it as a “45 miles per hour” sign.4 Researchers elsewhere crafted inaudible sounds capable of injecting malicious voice commands into AI-powered voice assistants.5

Cartwright at Asahi argues that AI tools for generating outputs like customer insights need to be properly managed in terms of consent and data re-use protocols. “You've got to make sure that the development environments, and particularly the data science development environments, have strong controls and are well-protected,” he says.


4. Make cybersecurity accessible to all

The study found a key characteristic of an effective CISO is their ability to communicate the value of a strong cybersecurity posture up, across and out into the organisation.

“One thing that is becoming really important is the ability to engage with businesses transparently so that they feel comfortable picking up a phone and just having a conversation. The days of security being something in the backroom are gone,” says Cartwright at Asahi. He believes transparent conversations with the board and accountability in cyber decisions pave the way for CISOs to become more strategic across the organisation.

Members of the C-suite often overestimate the effectiveness of their organisation’s overall approach to cybersecurity, suggesting more transparency with senior leaders and a shared understanding of risk is important as AI implementation progresses.

In parallel, organisations need to build capabilities to detect “shadow AI”. As in the early days of cloud, organisations have already fallen victim to well-intended experimentation with AI in non-production environments, with ungoverned implementations leading to sensitive data exposure, model theft and excessive, unexpected solution costs. Such setbacks call both the business value and the risk posture of these organisations into question.

Breaking down barriers to cybersecurity starts with familiarity with the topic at large. Asahi credits regular cyber-related “lunch and learn” sessions with prompting the business to think more about cybersecurity. Cartwright notes that these sessions don’t have to cover the biggest cyber threats, such as phishing; the goal is to make cybersecurity accessible to all.

5. Help the business to successfully deploy AI

With AI applications freeing up the cyber team to focus on value-add objectives, cyber professionals can help the rest of the business drive value from AI with confidence. CISOs have a near-term opportunity to become trusted partners, helping teams maximise the value creation potential of the AI tools they look to implement. When cyber is incorporated into processes early, other functions gain efficiency by avoiding budget overruns and delays.

Integrating cyber into AI initiatives is an opportunity for cyber functions to expand their influence across the organisation. Leading cyber teams are showing their input can inform better decisions on everything from acquisitions to supply chain governance. For example, ANZ’s Ananthapavan and his team provide strategic threat intelligence that feeds into business decision-making.

The 2024 EY Global Cybersecurity Leadership Insights Study explores these and many other insights around how to embrace AI for better cyber outcomes and create value for the whole organisation.


EY.ai Generative AI maturity model

Map and visualize current GenAI maturity across the organization independent of and within cybersecurity.

EY.ai Generative AI Confidence Index

Evaluate the confidence score at the enterprise, portfolio/business unit or solution level across 10 Responsible GenAI categories.


Summary

AI can be a CISO’s ally in both navigating the cybersecurity landscape and building their organisation’s reputation with the broader business. The most successful CISOs will be those who can articulate the value of cybersecurity to the enterprise in the AI era and partner with different functions, giving the business confidence to adopt AI securely.

