How can cybersecurity transform to accelerate value from AI?

By Richard Bergman

EY Global Cyber Transformation Leader

Cybersecurity leader. Forensics guru. Helping organizations face the future with confidence.

Contributors
16 minute read 1 May 2024
Related topics Cybersecurity Consulting

With AI adoption across business functions booming, CISOs can reposition cybersecurity from a perceived barrier to accelerators of AI value.

In brief

  • Cybersecurity practitioners are leveraging AI's accuracy and efficiency to stay ahead of adversaries.
  • Cybersecurity should aid AI deployment and experimentation, and play a strategic, proactive and integrated role in AI ventures across the whole organization.
  • Successful CISOs convey the importance of cybersecurity in AI initiatives, helping the business capture value, transform at speed and seize market opportunities.

The latest advancements of artificial intelligence (AI) and its pace of experimentation across business functions present opportunities and risks for the Chief Information Security Officer (CISO). AI has great potential to ease cybersecurity workloads and the global skills shortage by expanding the scope of task automation, shortening response time and optimizing visibility across the attack surface. But the use of Generative AI (GenAI) across business functions is opening new vulnerabilities that many cyber functions are not currently positioned to address.

The 2023 EY Global Cybersecurity Leadership Insights Study showed that one of the key traits of organizations with the most effective cyber functions, known as “Secure Creators,”1 is their speed in adopting emerging technology in cyber defense, including the use of AI and automation. This speed has in part allowed them to have detection and response times to cyber incidents that are over 50% quicker than other organizations.

For the 2024 Global Cybersecurity Leadership Insights Study, EY teams conducted additional research this year to learn how Secure Creators are responding to the recent surge in AI and GenAI use — both in the cyber function and throughout the enterprise.

Through in-depth interviews and topic cluster analysis of academic research publications, the research found that CISOs are proactively embracing AI in the cyber function but are not yet helping other business functions embed cyber measures into their AI models to a large extent.

If CISOs can fill this gap, it will help them drive value across the organization through safer, more widespread adoption of AI. It also offers them a chance to reposition cybersecurity from “the department of no” to the true enablers of technology transformation.

Local Perspective IconA local perspective

Nordic companies harness AI to advance cybersecurity

Nordic companies are harnessing AI to advance cybersecurity, aligning with global talent to bridge the skills gap and propel themselves forward in the digital safeguarding capabilities. This strategic shift is driven by AI's pivotal role in enhancing their security capabilities amid the rise in new challenges posed by AI.

Responding proactively to these advancements, the Nordic firms remain vigilant; they are investing in a comprehensive approach to cybersecurity including advanced threat intelligence and collaborative policy-making to address and stay ahead of new risks. For Nordic leaders, cybersecurity transcends traditional roles — it's a catalyst for innovation and business expansion, demonstrating how AI integration can drive organizational transformation and elevate market positioning. Acknowledging the dual nature of AI as both a threat and an opportunity, Nordic companies invest in cybersecurity, partnering with universities and governments for balanced legislation and oversight.

Contact us for more insights from the Nordic data.

Local contact

Tim Best
EY Sweden, Advisory, Risk, Cybersecurity, AM&M Cyber Leader
  • About the research

    EY teams conducted topic cluster analysis by examining six years (2017–2022) of cybersecurity-related academic research publications as a proxy for understanding shifting technological trends and lines of innovation in cybersecurity. Using EY SALIENT, over 18,000 publications were analyzed using natural language processing (NLP) to form research topic clusters. Using citation graph analysis, these topic clusters were organized to help create temporal chains that show how an idea is connected to past and future concepts. Utilizing GenAI, we mapped key cyber themes onto the topic clusters, giving a broad view of how general themes and topics in cybersecurity evolve over time.

    In addition, in-depth qualitative interviews were conducted to better understand organizations’ approach to securing the use of AI and how the cybersecurity function itself is leveraging AI for better outcomes. Interviews with cybersecurity leaders were conducted in February 2024 across five different sectors and covering the Americas; Asia-Pacific (APAC); and Europe, the Middle East, India and Africa (EMEIA). Respondents represented organizations with over US$1 billion in annual revenue. For simplicity, we refer to these leaders as CISOs in this study.

Trail runners in the alps
(Chapter breaker)
1

Chapter 1

AI as an ally for cybersecurity

Secure Creators are using AI to stay ahead of threats, needing fewer people and resources to do so.

One key trait of Secure Creators’ more effective and adaptive cybersecurity approach is their integration of AI — 62% are using or are in the late stages of adopting AI or machine learning (ML) vs. 45% of other organizations.

AI in cybersecurity is not new. The relationship can be traced back to the 1980s, and EY analysis reveals a sharp rise in AI-related cyber research, patents and investment since 2015. AI is now part of 59% of all cyber patents and is the top technology explored in cyber research since 2017.

Today, Secure Creators are integrating AI into their detection, response and recovery processes in new ways, allowing them to stay ahead of adversaries, who themselves are using AI attack methods unhindered by regulations or use policies.

By rapidly analyzing enterprise-scale data, AI can automatically detect different attack signatures and new attack methods. With the proper architecture, AI can plug into existing cyber approaches across IT and OT systems to detect incidents faster than people alone.

Advances in deep learning and neural networks now enable the parsing of larger and more heterogeneous datasets in real time. The ability to self-train and learn is accelerating automation, helping cyber teams continuously monitor networks and applications, detect and forecast threats in near real-time, and respond to incidents faster. Deep learning also improves cyber accuracy and efficiency. A meta-analysis of 69 research studies shows an average accuracy of over 92% in detecting spam, malware and network intrusions.2

Gajan Ananthapavan, Global Head of Security Operations, Intelligence and Influence at Australia and New Zealand Banking Group Limited (ANZ Bank), says around 30% of the organization’s incident response has been automated, thanks largely to ML and AI. “We ingest more than 10 billion data events each day as part of monitoring, detecting and responding to potential security events and incidents across our environment,” he said. “We wouldn't be able to manage that volume without ML and AI.”

A CISO at a large North American asset management firm says the company has cut its mean time to detect and respond by at least 50%. Data from the 2023 study shows that Secure Creators saved over 150 days on average detecting and responding to a data breach.

You need to transform your tech stack before thinking about profiting with things like automation and GenAI. It doesn't make sense try to automate a broken process.
The group CISO at Bupa

AI helps cyber teams be more effective with the same or fewer resources, presenting an opportunity to satisfy the CFO by doing more with less. Early EY analysis points to efficiency gains from the use of AI in cyber defense that can range from 15% to 40%. To get the most efficiency gains, CISOs need to first ensure they reduce complexity and consolidate legacy cyber infrastructure. As the group CISO at Bupa notes, “You need to transform your tech stack before thinking about profiting with things like automation and GenAI. It doesn't make sense try to automate a broken process.”

  • Application of AI in detection, response and recovery

    Today, organizations are leveraging AI in cybersecurity primarily for detection, response and recovery. For instance, our topic clustering analysis shows novel neural detectors can enhance network intrusion detection while natural language processing (NLP) and ML can automatically generate cyber threat intelligence (CTI) records. Applications range from detecting internal human error through user analytics and ML, AI analytics providing real-time cyber-threat assessment for smart energy grids, and even leveraging AI and ML on implantable medical devices to protect against potentially fatal malicious attacks.

    Image Description : Data visualization showing examples of AI applications in cybersecurity across different industry sectors.

  • The emergence of GenAI in cybersecurity defense

    The use of GenAI in cybersecurity is an emerging area of research, with dozens of publications in the last two years alone. GenAI has the potential to complement traditional AI systems in cyber while also enabling wholly new approaches. Its ability to act as a co-pilot that performs summarization and answers questions could help professionals rapidly distinguish between a real threat and a false positive. It might also help improve explainability – a key challenge for many AI-driven cybers tools today.

    Examples of GenAI use cases from recent research3:

    • Cyber threat intelligence (CTI): Enhance the gathering and reporting of CTI, helping parse large volumes of open-source data to create relevant summaries across sources.
    • Risk identification: Automatically identify and prioritize potential cybers risks in software based on user reviews of mobile applications.
    • Synthetic data for training AI: Generate anonymous, synthetic cyber-attack data to train AI systems or cyber professionals.
    • Immersive cyber training: Chat bots allow interactive training, such as more realistic capture the flag (CTF) exercises for IT professionals or enhancing cyber education for students and non-IT professionals.
    • Improved penetration testing: Provide a co-pilot for cyber professionals to do real-time question-and-response to better evaluate threats while also automating aspects of penetration testing.
    • Anomaly and vulnerability detection and resolution: Detect anomalies in security logs, such as SQL injections, and automated vulnerability detection and repair, with examples in web applications and software.

Balancing AI and your people

Striking the right balance between AI-enabled automation and people control will be crucially important for organizations’ accountability to shareholders, boards and regulators. The key for CISOs is to identify the areas where AI-enabled automation is most suited to replace manual processes.

For instance, teams are still producing blueprints for systems to follow, according to Adam Cartwright, CISO at Asahi. “What we'd like is not having to write playbooks in the near future because the AI engine will have the context to understand what an analyst would do in this case and recommend those steps back to us, or even perform them.”

Similarly, Ananthapavan at ANZ Bank stated, “Currently, threat hunting is a manually-intensive process which involves coding and developing scripts, and then running them across our environment. We are looking to automate large parts of that process, to help identify malicious activity and respond faster.”

AI’s impact on retaining cyber talent will also be profound. It will allow employees to focus on more engaging and value-adding work, and to increase their throughput. CISOs report better employee retention thanks to eliminating menial work. It will also allow CISOs to reduce spending on contracting. “It's much easier to implement an AI [use case] than to hire and train and retain staff. It can handle a much greater amount of information in a shorter amount of time,” says one CISO from an Asian-headquartered electronics manufacturer.

CISOs are also eyeing a nascent shift from technical cyber practitioners to AI operators and “fine tuners.” Employees with prompt engineering skills, enabled by the right technology and an AI interface, can do the work of multiple penetration testers.

  • Questions for CISOs to ask potential AI vendors

    One deployment challenge referenced by Kostas Georgakopoulos, global CISO at Mondelez, is that CISOs and organizations must contend with excessive vendor hype over their AI offerings. In the EY CEO Outlook Pulse, two-thirds of CEOs note the sharp increase in companies claiming to be experienced in AI, make it hard to identify credible partners or acquisition targets.

    Key things to ask vendors include:

    • Where is the product getting the data to guide its automation?
    • What steps have been taken to verify that data’s accuracy?
    • Is our data is being used to train cybersecurity products or third-party models?

    Beyond vendor selection, CISOs should also consider:

    • How are we measuring performance beyond just accuracy?
    • How are people involved in supervising AI-powered systems?
    • What level of alert thresholds should be set to balance automation benefits and the potential of employees lowering their guard and genuine attacks go undetected?

Actions for CISOs:

  • Expand the scope of automation: Produce a detailed audit of the cyber team’s automatable tasks and consider where people insight is best focused while ensuring attention to explainability and appropriate thresholds. Assess automation capabilities of third-party vendor products, prioritizing the implementation of functionality within vendor-partner software above custom automation use case.
  • Consider the entire enterprise when evaluating AI for cybersecurity: This includes corporate environments, plants and field-level assets.
  • Stay up to date on emerging applications of AI in threat detection and recovery: While not yet emerging in a meaningful way, EY topic clustering analysis shows threat detection and recovery are active areas of inquiry. Considerations should be given to applications in these areas, such as recovery planning, analysis of incident reports, and prevention of zero-day attacks.
  • Follow the data: AI and ML investments are going to be most profitable where there is cyber data density. Focus on areas such as identity management, threat and vulnerability, and security operations where large-scale data is difficult to manage.
  • Build for reuse: Certain functions will have broad applicability within cybersecurity and across the enterprise – avoid development and maintenance duplication by centrally managing intake and development. For example, context-based prioritization of event, incident, threat, risk, vulnerability and any other remediation activity should be standardized, built once and used many times.
  • Catch and block, review and release: Implement a model where AI automates tedious, error-prone, high-volume “catch and block” tasks and curates events for cybersecurity professionals' “review and release” decisions that require judgment and authorization.
Woman looking up at climbing wall
(Chapter breaker)
2

Chapter 2

Cybersecurity in the AI adoption journey

Cybersecurity can accelerate the confident adoption of AI across the enterprise.

With organizations implementing AI across the business, the cybersecurity function has a near-term opportunity to become a trusted partner to help others realize the value creation potential from AI-based solutions.

Rapid adoption of AI can leave an organization vulnerable to new cyber-attacks and compliance risks. Cyber teams need to take on a more strategic, proactive and integrated role within the enterprise to install appropriate controls as AI functions and experiments proliferate.

Tackling the cyber threat in AI expansion

Adversaries are already targeting vulnerabilities in AI systems. Security researchers have used prompt injection – engineering prompts to deceive systems into bypassing filters or guardrails – to attack conversational bots from the likes of Bard and OpenAI.4 White hat researchers have demonstrated how data poisoning – feeding malicious data into algorithms to manipulate its output – can be launched on popular data sets at low cost with minimal technical skills.5 In another project, stickers were added to a stop sign to trick an autonomous vehicle into misreading it as a “45 miles per hour” sign.6 Researchers elsewhere crafted inaudible sounds capable of injecting malicious voice commands to AI-powered voice assistants.7

In addition to attacks from bad actors, organizations need to ensure employees do not breach compliance or regulations while using AI, such as by exposing sensitive data, intellectual property or restricted material into an AI model to run queries or perform tasks. “There are so many tools out there, and they work in different ways and with different risks. It’s very easy for someone to sign up and start using them,” says a manufacturing organization’s CISO.

The 2023 study shows that only 36% of CISOs are satisfied with the non-IT workforce’s adoption of cyber best practices. The need for AI cyber training and education is further evidenced in recent academic research trends from our topic clustering analysis. Nearly 50% of literature around organizations’ cyber management involves training and education, comprising the largest topic in this space. Additionally, 23% of that research includes the intersection of AI with training and education — an area that workers are looking for more guidance. According to upcoming EY research, only 62% of US workers say their employer has made educating employees about responsible AI usage a priority.

Cartwright at Asahi also argues that AI tools for generating outputs like customer insights need to be properly managed in terms of consent and data re-use protocols. “You've got to make sure that the development environments, and particularly the data science development environments, have strong controls and are well-protected,” he says. The interviewees also noted the importance of instituting explainability, such as ensuring a credit limit decision does not fall foul of anti-discrimination regulations regarding the data it draws from, faulty inferences or misleading proxy data.

Opportunities for cybersecurity to improve AI implementation in different domains:

  • Supply chain

    EY topic cluster analysis shows supply chain vulnerabilities have seen a two-fold increase in research over the past five years. In upcoming EY research, we see supply chain leaders are already leveraging AI across dozens of use cases and looking to do the same with GenAI across planning, purchasing, manufacturing, logistics and sales. Four out of five supply chain leaders believe increased cybersecurity vulnerability is a moderate or major risk, topping the list of GenAI supply chain implementation risks. Cyber leaders should prioritize more engagement with and continuous monitoring of the supply chain to ensure that this already vulnerable attack surface is protected with broader AI adoption.

    Source: EY topic clustering analysis

  • Smart grids

    Smart energy service networks (ESNs) are using AI and ML to optimize solutions for energy production and consumption, demand response, and grid self-diagnosis. However, the rapid rollout of these technologies has sometimes left cybersecurity concerns behind. Cyber leaders have an opportunity to leverage existing AI-powered smart grid analytics to build real-time cyber threat assessments.

    Source: EY topic clustering analysis

  • Autonomous vehicles

    Autonomous vehicles (AV) use embedded AI systems that sense their environment to make decisions, creating a delicate cyber-physical system that could, if not secured adequately, lead to potentially fatal consequences. AV cyberattacks can take on many forms, including attacks on AI-powered control systems, driving components and risk assessment elements. Cybersecurity leaders are getting ahead of the threat, with some building AI onboard intrusion detection systems, which carefully monitors the AV’s operation for any anomalous behavior.

    Source: EY topic clustering analysis

Visibility from the top

Effective CISOs are able to communicate the value of a strong cybersecurity posture up, across and out into the organization. Awareness of AI risks among the C-suite and board provides an opportunity to build upon. As exemplified in the 2023 study, CISOs have already started to expand their influence, with more interaction with the board and more CISOs reporting directly to the C-suite. Building confidence with the board and C-suite is rooted in transparency.

“One thing that that is becoming really important is the ability to engage with businesses transparently so that they feel comfortable picking up a phone and just having a conversation. The days of security being something in the backroom are gone,” says Cartwright at Asahi. He believes transparent conversations with the board and accountability in cyber decisions paves the way for CISOs to become more strategic across the organization.

Members of the C-suite often overestimate the effectiveness of their organization’s overall approach to cybersecurity (48% satisfied versus 36% of CISOs) with gaps smaller for Secure Creators, suggesting more transparency with senior leaders and a shared understanding of risk is important as AI implementation progresses.

The days of security being something in the backroom are gone.
Adam Cartwright, CISO at Asahi

In parallel, organizations need to begin to explore capabilities to detect “shadow AI.” Similar to the early days of cloud, organizations have already fallen victim to well-intended experimentation with AI in non-production environments leading to sensitive data exposure, model theft and excessive, unexpected solution costs due to ungoverned implementations. These sorts of setbacks put both the business value and the risk posture of organizations in question.

A 360 view on enterprise-wide AI

An outward-facing CISO can help an organization improve overall AI adoption by using cybersecurity as a framework for coordination. Some companies are forming AI advisory bodies to provide coordination of AI initiatives which can both tackle the shadow AI problem and improve visibility on AI experimentation. One asset manager has set up such an entity — staffed with representatives across business groups, including cyber — for anyone in the organization seeking to utilize AI, providing rules around shareable data and restrictions on sending data outside of the organization.

Cybersecurity is becoming a core component in operational decisions on the ground too. A CISO from a Europe-based retail manufacturer exemplified this integration, noting, “Procurement is now aware that when they are starting a new project, they will contact us, then we can give them our requirements for upcoming suppliers and upcoming applications they want to use.”

Strong cybersecurity across AI gives teams confidence to experiment securely, helping companies identify practical applications and clearly define the return on investment. One CISO we spoke to says their cyber team helped stand up their own instance of ChatGPT so that other business functions could work within the confines of the organization’s four virtual walls.

Finding a common language in cyber

Breaking down barriers to cybersecurity starts with a familiarity with the topic at large. Asahi credits holding regular cyber-related “lunch and learn” sessions to kickstart the business to think more about cybersecurity. Cartwright noted that these don’t necessarily have to be about the biggest cyber threats like phishing, but the goal is focused on making the topic of cybersecurity accessible for all.

Bupa is another example, where the CISO is trying to bridge the gap with the business by ensuring cyber metrics are included in reporting metrics. "In every business performance committee, we are working to embed cybersecurity metrics, trying to embed metrics in terms of cyber performance. It’s not perfect, it’s a journey, but we are trying to make cyber part of business metrics," says group CISO at Bupa.

Actions for CISOs:

  • Embed cyber professionals into the AI use case identification and intake and governance process: this early-stage insertion will allow for cyber integration commensurate with the sensitivity of the data and business function.
  • Publish and govern AI acceptable use standards across the business: outline the guardrails and guidance under which the business and supporting technologists should design and build AI solutions. Adopt a set of technical cyber controls that align to emerging AI industry standards frameworks like the US National Institute of Standards and Technology’s AI Risk Management Framework and the EU AI Act.
  • Implement AI-specific risk mitigation: consider the unique characteristics and challenges associated with AI systems, such as complexity, adversarial attacks, lack of interpretability, continuous learning and ethical considerations.

EY.ai Generative AI maturity model

Map and visualize current GenAI maturity across the organization independent of and within cybersecurity.

Get started

EY.ai Generative AI Confidence Index

Evaluate the confidence score at the enterprise, portfolio/business unit or solution level across 10 responsible GenAI categories.

Learn more

Woman on push scooter at night in the city near the road
(Chapter breaker)
3

Chapter 3

How AI helps cybersecurity deliver more business value

Productivity gains from AI in cybersecurity allow practitioners to help other business functions adopt AI themselves.

Secure Creators work with the C-suite to help build strategies that drive innovation and value creation across the enterprise. The emergence of AI is another opportunity for cybersecurity functions to demonstrate their value to the organization. With AI applications freeing up time for the cyber team to focus on value-add objectives, cyber professionals can help the rest of the business drive value from AI with confidence.

  • Open image description #Close image description

    Data visualization bar chart showing the impact cybersecurity has on creating value, responding to market opportunities, and the pace of transformation and innovation for Secure Creators compared to other organizations.

CISOs have a near-term opportunity to become a trusted partner, helping teams maximize the value creation potential from the AI tools they look to implement. Helping the business confidently deploy AI can shift the perception of cybersecurity from a team that slows things down to one that enables confident technology adoption at an accelerated pace. By setting up processes that incorporate cyber early, other functions will gain efficiencies by minimizing budget issues or delays.

Integrating cyber into AI initiatives is an opportunity for cyber functions to expand their influence across the organization. Leading cyber teams are showing their input can inform better decisions on everything from acquisitions to supply chain governance. One CISO is deeply involved in the holistic evaluation of acquisition targets, something increasingly important given CEOs’ increased appetite for M&A in 2024.8 The same CISO builds confidence with shareholders by providing information and assurance regarding the firm’s ability to protect their information. Similarly, ANZ’s Ananthapavan and his team provide strategic threat intelligence that feeds into business decision-making.

One CISO emphasizes how cybersecurity can drive sales and increase the bottom line by creating confidence with customers. "You have a number of customers who are putting more and more focus on certifications, questionnaires, ascertaining that their suppliers are meeting a certain level of diligence and us demonstrating that we do so either helps to keep existing customers happy or open new markets for us, in which case, we should be able to quantify that for the business.”

AI also enables the cyber function to make decisions and conduct analyses quicker, streamline processes for cost savings, and reduce the need for additional employees. This can be essential for complying with regulatory demands but also to quickly respond to market opportunities.

Actions for CISOs:

  • Establish AI principles and guardrails to support experimentation: As businesses rapidly experiment and adopt AI it is essential for CISOs to move quickly to protect and accelerate the rate of innovation.
  • Help the business get use cases to market faster: Develop a pre-configured and pre-sanctioned set of architectures, integration patterns and technology stack components to support business use cases. Make secure by design the fastest route to market in your organization.
  • Target cyber enablement: Leveraging a practical AI security and risk framework to aid in getting to “yes” for the business while remaining within risk tolerances reversing the perception that cybersecurity is the business prevention department.
  • Gain visibility of the AI attack surface and third-party ecosystem: Many CISOs have spent a lot of time in front of their Boards and executive teams responding to third-party data breaches. Our research showed that Secure Creators have strategies in place to manage all cyber risks across the attack surface and their third-party ecosystem. Expanding this to cover new AI attack surfaces will allow organizations to adopt AI with confidence.

While leading organizations are enthusiastic in onboarding AI for cybersecurity, they are still at the early stages in bringing cybersecurity across the business as it implements AI. The most successful CISOs will be those who can articulate the value of cybersecurity to the enterprise in the AI era, beyond narrow definitions of security, giving the business confidence that they can adopt AI securely.

AnnMarie Pino, Associate Director, EY Insights, Ernst & Young LLP; Michael Wheelock, Associate Director, EY Insights, Ernst & Young LLP and Ryan Gavin, Supervising Associate, EY Insights, contributed to this article.

Our ecosystem awards and recognitions

EY is proud to be recognized for the transformative value we help our clients realize.

Explore our recognitions

  • Show article references #Hide article references

    1. Secure Creators were identified through statistical modeling of the 2023 study’s survey data.
    2. Performance Comparison and Current Challenges of Using Machine Learning Techniques in Cybersecurity, Shaukat, et. Al. 2020
    3. Large Language Models in Cybersecurity: State-of-the-Art (arxiv.org), https://arxiv.org/pdf/2402.00891.pdf
    4. The Guardian: UK cybersecurity agency warns of chatbot ‘prompt injection’ attacks; https://www.theguardian.com/technology/2023/aug/30/uk-cybersecurity-agency-warns-of-chatbot-prompt-injection-attacks
    5. IEEE Spectrum: Protecting AI Models from “Data Poisoning”; https://spectrum.ieee.org/ai-cybersecurity-data-poisoning
    6. Autoblog: Researchers hack a self-driving car by putting stickers on street signs; https://www.autoblog.com/2017/08/04/self-driving-car-sign-hack-stickers/?guccounter=1
    7. Latest Hacking News: Study Reveals Inaudible Sound Attack Threatens Voice Assistants; https://latesthackingnews.com/2023/03/27/study-reveals-inaudible-sound-attack-threatens-voice-assistants/
    8. EY CEO Outlook pulse survey

Summary

Secure Creators are advanced in their usage of AI for cybersecurity but are still at the early stages of using it to promote AI usage across the business. The most successful CISOs will be those who can articulate the value of cybersecurity to the enterprise in the AI era, giving the business confidence to adopt AI securely.

About this article

By Richard Bergman

EY Global Cyber Transformation Leader

Cybersecurity leader. Forensics guru. Helping organizations face the future with confidence.

Contributors
Related topics Cybersecurity Consulting