
Why CISOs must cultivate a cyber-secure workforce in the age of AI

Scaling AI in enterprises presents security challenges, including cyber-attack risks and the need for better training in responsible AI usage.


In brief:

    • Organizations scaling AI solutions create efficiency but raise cybersecurity concerns as staff mishandle sensitive data and fall prey to AI-powered scams.
    • A 2024 EY survey highlights that nearly 80% of respondents worry about AI’s role in cyber attacks, and 39% lack confidence in their own responsible use of AI.
    • 64% of CISOs are not satisfied with the non-IT workforce’s adoption of cybersecurity best practices, underscoring the need for better employee training to reduce risk.

    After months of experimentation, organizations are moving to implement Artificial Intelligence (AI) solutions at scale, and the enterprise software they already use for daily workflows is increasingly AI-powered too. While they hope to reap dividends in efficiency, productivity and creativity, the transition requires careful navigation by the cyber function.

    Already, companies have reported challenges as their employees rush headlong into AI. Staff have dropped sensitive intellectual property (IP) into external AI models. They have been fooled by AI-powered deepfakes, as in the case of a deepfaked Chief Financial Officer (CFO) requesting a transfer of funds, reported in Hong Kong.1 Nearly 80% of respondents to the 2024 EY Human Risk in Cybersecurity Survey expressed concern about the use of AI in carrying out cyber attacks, and 39% said they were not confident they knew how to use AI responsibly.

    The promise of contemporary AI is to democratize access to advanced analytics across business units and staff, far beyond the confines of the IT department. But this only magnifies a longstanding worry among cyber professionals about security practices and awareness in the workforce. According to EY analysis, training and education account for nearly 50% of the literature on organizations’ cyber management, making it the largest topic in this space. Furthermore, 64% of Chief Information Security Officers (CISOs) polled by the global EY organization are not satisfied with non-IT workforce adoption of cybersecurity best practices. How can organizations better prepare their workforce for the cyber risks that come with advanced AI adoption?

    Safer technology; safer workforce

    User-friendly interfaces are a hallmark of contemporary AI, offering non-technical staff the ability to perform more advanced data and analytics workflows through channels like natural language querying. But that simplicity is deceptive. Beneath the surface lies software and supply chain complexity into which many enterprises lack visibility, especially in second-, third- or fourth-party solutions. Users need to understand how their data is being used, such as in model training, as well as the risks of data breaches and leakage.

    The amount of corporate data funneled into chatbots by employees rose nearly fivefold2 from March 2023 to March 2024, according to one study of 3 million workers. Among technology sector employees, 27.4% of that data was classified as sensitive, up from 10.7% the previous year. This puts organizations at higher risk of data exfiltration and the bypassing of security controls and processes. Threats mount as more powerful AI solutions access more data and developers try to apply AI to datasets that are not yet authorized, classified or authenticated, amplifying any weaknesses in existing practices and protocols.

    The cybersecurity implications of AI use in the wider workforce accentuate a longstanding concern among CISOs and their teams about weak adherence to cybersecurity protocols. According to the EY 2023 Global Cybersecurity Leadership Insights Study, 64% of CISOs were not satisfied with the non-IT workforce’s adoption of cybersecurity leading practices. Respondents cited weak compliance with established leading practices beyond the IT department as the third-biggest internal cybersecurity challenge, and human error continued to be identified as a major enabler of cyberattacks.

    Cybersecurity across the organization
    64% of CISOs were not satisfied with the non-IT workforce’s adoption of cybersecurity best practices

    Firms have long struggled with the “shadow IT” phenomenon, in which software solutions are adopted ad hoc, outside established governance frameworks. AI is worsening the problem: so many tools and solutions are now available to teams, and the risks of data and IP exposure grow as employees feed more sensitive information into AI systems, such as confidential customer details, source code and research and development materials. This is happening amid the already frenetic pace of digital initiatives, in which the cyber function must balance lending its support and experience to enable digital transformation against leaving the organization exposed.

    It also comes at a time of rising regulatory concern, as governments appreciate how cyber breaches can ricochet through an economy and impact critical infrastructure. Regulatory bodies are increasing obligations surrounding disclosure of cybersecurity incidents, with executives becoming personally liable for failures in some instances.


    With AI comes potentially more significant risks of data and IP exposure as employees feed more sensitive information into AI systems, such as confidential customer details, source code and research and development materials.


    Stronger armor: A three-pronged approach to technology, governance and operations

    Given the competitive pressure on AI adoption, organizations must not allow cybersecurity governance to become a barrier to progress. Instead, the function needs new approaches to support responsible acceleration.


    To nurture a cyber-secure workforce, the function needs visibility into how AI tools are being used across the business, which requires a three-pronged approach centered on technology, governance and operations.


    On the technology front, security and network companies are already developing solutions that enable cyber teams to detect when certain AI services are being used, tracking data flow and lineage and automating compliance through common controls and tests. Others leverage data already in an organization’s network to monitor activity, such as documents being uploaded or prompts submitted to tools like ChatGPT. AI is also increasingly embedded in incident management processes. But technology is supplemental to a deeper evaluation of a company’s risk profile.
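    As a minimal sketch of the kind of detection logic such monitoring tools apply, the example below flags outbound requests to known AI services and notes possible sensitive payloads. The domain list, log record format and patterns are illustrative assumptions, not any vendor’s implementation.

```python
import re

# Illustrative domain list and patterns; not a reference to any
# specific vendor's detection product.
AI_SERVICE_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "gemini.google.com": "Gemini",
    "claude.ai": "Claude",
}

SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # SSN-like identifier
    re.compile(r"(?i)\bconfidential\b"),             # labeled material
    re.compile(r"(?i)\b(api[_-]?key|password)\b"),   # credentials
]

def flag_ai_traffic(log_records):
    """Yield alerts for outbound requests to known AI services.

    Each record is assumed to be a dict with 'user', 'domain' and 'body'.
    """
    for rec in log_records:
        service = AI_SERVICE_DOMAINS.get(rec["domain"])
        if service is None:
            continue  # not an AI service we track
        hits = [p.pattern for p in SENSITIVE_PATTERNS
                if p.search(rec.get("body", ""))]
        yield {
            "user": rec["user"],
            "service": service,
            "sensitive_indicators": hits,
            "severity": "high" if hits else "info",
        }

sample = [{"user": "jdoe", "domain": "claude.ai",
           "body": "Summarize this confidential product roadmap"}]
for alert in flag_ai_traffic(sample):
    print(alert)
```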


    Cybersecurity policy should focus on threat modeling from the outset, including an inventory of third- and fourth-party AI services, from the architecture and the service itself to the integrations and APIs required. Modeling these threats in aggregate allows organizations to quantify and spot risk and informs the design of appropriate controls. Organizations also need to define procedures for ensuring data protection and privacy provisions in the development of AI models and be accountable for the outputs of their algorithms. This should cover not just compliance requirements but ethical considerations.
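    To make such an inventory concrete, the sketch below shows one plausible shape for an AI service register with a toy aggregate risk score. The fields and weights are illustrative assumptions, not a prescribed methodology.

```python
from dataclasses import dataclass, field

@dataclass
class AIServiceEntry:
    """One row in a third-/fourth-party AI service inventory (illustrative)."""
    name: str
    vendor: str
    tier: int                           # 3 = third party, 4 = fourth party
    data_classifications: list = field(default_factory=list)  # e.g. ["PII"]
    integrations: list = field(default_factory=list)          # APIs it touches
    trains_on_customer_data: bool = False

    def risk_score(self) -> int:
        # Toy weighting: deeper supply-chain tiers, sensitive data and a
        # broad integration surface all raise the score.
        score = (self.tier - 2) * 2
        score += 3 * sum(1 for c in self.data_classifications
                         if c in ("PII", "IP", "financial"))
        score += len(self.integrations)
        if self.trains_on_customer_data:
            score += 5
        return score

inventory = [
    AIServiceEntry("DocSummarizer", "VendorX", tier=3,
                   data_classifications=["PII"],
                   integrations=["CRM API"],
                   trains_on_customer_data=True),
    AIServiceEntry("EmbeddedCopilot", "VendorY", tier=4,
                   integrations=["IDE plugin", "repo access"]),
]

# Rank the inventory so the riskiest services get controls designed first.
for entry in sorted(inventory, key=AIServiceEntry.risk_score, reverse=True):
    print(f"{entry.name}: risk {entry.risk_score()}")
```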

    Threat evaluation must be supported by an effective operational system that can evolve to cope with what are essentially “living” AI solutions and data sets, ensuring continuous data verification, classification and scoping, including tagging for sensitivity and data criticality. Our research has found that some companies have as little as 20% of their data tagged or classified. Realistically, companies should prioritize tagging and verification for their most critical and sensitive data to ensure the right safeguards for issues like identity, access management, data flow, data access and lineage.
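    As a rough illustration of automated sensitivity tagging in such a pipeline, the sketch below classifies text by pattern matching. The patterns and tag taxonomy are assumptions for illustration; production tooling would combine rules like these with ML classifiers and human review.

```python
import re

# Illustrative rules: (data type, detection pattern, sensitivity tag).
RULES = [
    ("credit_card", re.compile(r"\b(?:\d[ -]?){13,16}\b"), "restricted"),
    ("email",       re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "internal"),
    ("source_code", re.compile(r"(def |class |#include)"), "confidential"),
]

SENSITIVITY_ORDER = ["public", "internal", "confidential", "restricted"]

def tag_document(text: str) -> dict:
    """Return detected data types and the strictest applicable tag."""
    found, level = [], "public"
    for name, pattern, tag in RULES:
        if pattern.search(text):
            found.append(name)
            if SENSITIVITY_ORDER.index(tag) > SENSITIVITY_ORDER.index(level):
                level = tag
    return {"types": found, "sensitivity": level}

print(tag_document("Contact jane@corp.com about card 4111 1111 1111 1111"))
# -> {'types': ['credit_card', 'email'], 'sensitivity': 'restricted'}
```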

    Threat modeling and access controls are critical to implementing an effective cybersecurity governance model, but organizations must be cognizant of the risk of falling back on old and ineffective response mechanisms. One approach is to place an AI expert on the board for a six-month rotation, with the power to devise a new governance model that includes a focus on education and training. Accountability is also required to ensure responsibility for AI governance is apportioned appropriately, covering custody, ownership and use.

    A cyber-informed workforce to combat employee error in the AI era

    While exotic AI hacking attempts like deepfake CFO bank transfer requests dominate the headlines, employee error remains the most prominent vulnerability for most organizations. AI adoption opens a new threat vector, requiring controls that prevent unauthorized personnel from intentionally or unintentionally acquiring sensitive information they may not previously have had access to. Indeed, the entire promise of AI is giving employees the chance to query and extract value from more data than before. That promise can only be delivered if cyber guidance is equally easy for them to obtain.

    One common trait of the successful companies analyzed in our 2023 Global Cybersecurity Leadership Insights Study, dubbed “Secure Creators,” was the integration of cybersecurity into all levels of the organization, from the C-suite to the workforce at large. Yet only half of cybersecurity leaders overall said their cyber training is effective. Can AI itself deliver more effective cyber communication and give employees the support they seek?

    More sophisticated and intuitive chatbots, for example, could answer employee questions about sensitive or restricted data, reducing both the burnout of cyber teams attending to queries and the frustration of employees wading through lengthy, complex policy documents. Implementing control mechanisms alongside easy querying can reduce shadow IT risks like dropping sensitive data, IP or restricted material into AI models.
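    One hedged sketch of such a control mechanism: a pre-submission check that reviews an employee’s prompt against simple policy rules before it reaches an external model, pointing the user to the relevant policy rather than silently blocking. The rules and policy reference codes here are hypothetical.

```python
import re

# Hypothetical policy rules and reference codes, for illustration only.
POLICY_RULES = [
    (re.compile(r"(?i)\bcustomer\b.*\b(ssn|account number|home address)\b"),
     "DATA-07: customer PII may not leave approved systems"),
    (re.compile(r"(?i)\b(source code|proprietary|unreleased)\b"),
     "IP-03: intellectual property and pre-release material are restricted"),
]

def review_prompt(prompt: str):
    """Return (allowed, guidance) for a prompt bound for an external AI tool."""
    violations = [ref for pattern, ref in POLICY_RULES if pattern.search(prompt)]
    if violations:
        # Explain why, so the employee gets policy guidance, not just a block.
        return False, "Blocked. See " + "; ".join(violations)
    return True, "Allowed under current policy."

allowed, guidance = review_prompt("Draft a letter including the customer's SSN")
print(allowed, "-", guidance)
```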


    AI’s promise lies in giving employees the chance to extract value from more data than before. That can only be delivered if cyber guidance is equally easy for them to obtain.


    Where appropriate, gamifying cyber training can improve digital literacy by appealing to people’s competitive nature and engaging them in reward-driven learning programs, improving both engagement and interest. This is particularly important for communicating AI risks that go beyond conventional approaches like email phishing, such as deepfakes and synthetic media. Such solutions highlight the myriad positive ways in which technology itself can help tackle mounting cybersecurity challenges.


    Chief Data Officers, Centers of Excellence and design patterning

    To be cyber-secure in the AI era, it is not enough to rely on training and technology; organizational redesign, new reporting lines and adapted processes must be pursued to allow reasonable levels of adoption without cyber risk being worked through on a case-by-case basis. Governance protocols should not become a means of unduly freezing AI activity. Instead, companies need to tweak, and at times reimagine, institutions and leadership reporting to create the right incentives and structures.

    For instance, Chief Data Officers (CDOs) have tended to focus on harnessing data for business value, with limited integration with the technology function and even weaker intersection with the cyber unit and CISOs. That needs to change in the AI era, when a cybersecurity lens is needed throughout the data management life cycle as more data becomes usable in the business. CDOs must focus more on data governance, quality and privacy, and a broader range of skills is now required across the cybersecurity executive team as a whole.

    The breadth of skills needed in today’s function is expanding in several directions at once. Here, we outline some of the many cybersecurity executive profiles that have emerged in recent years. The best approach is to build a team that balances a combination of broad disciplines, with the understanding that each has its own strengths and weaknesses.

    Cybersecurity executive profile | Area of focus | Strengths | Weaknesses
    Security expert | All things security | Deep subject-matter expertise | Lack of business acumen
    Tech advocate | Technology solutions and tools | Technology oriented | Siloed thinking
    Risk and regulatory pros | Risk, controls and compliance | Good for highly regulated sectors | Lack of technology acumen
    Business transplants | Business integration | Business connectivity | Lack of technology and security acumen
    Part-timers and job-splitters | Split between cybersecurity and other primary roles | Cost saving | “Jack of all trades, master of none”

    At the institutional level, Centers of Excellence (CoEs) may become a requirement to coordinate AI adoption across business units and teams. These have a long precedent in the technology sector and are becoming more common in risk-aware industries like financial services. They can streamline and simplify governance requirements, mitigating the shadow IT phenomenon. “Design patterning” could further encourage secure and responsible AI by streamlining processes, architectures and data flows to permit faster deployment with minimal, though not zero, friction.

    Key takeaways for CISOs:

    • The workforce is anxious about cyber risk in the AI era and needs greater support.
      The most cyber-secure companies achieve a high degree of integration across the enterprise. AI itself could improve the quality and relevance of training and advice. Further, it could make cybersecurity policies and procedures easier to understand and access, such as via intuitive chatbots for querying cyber policies, or gamified learning. It can also support the development of more sophisticated control mechanisms, reducing the shadow IT phenomenon. CISOs will now play a crucial role in engaging senior leadership before the shadow AI risk grows unmanageable.
    • Threat modeling and evaluation must be comprehensive and continuous.
      In the AI era, the threat landscape is widening to include the many third- and fourth-party AI services in the supply chain. Organizations must identify AI solutions and assets to understand risk and compliance exposures. Companies need to evaluate architecture, services and integrations/APIs to quantify and spot risks and guide the development of controls to manage widespread AI usage. They must also invest in robust data classification to ensure AI models do not access and process restricted data and sensitive intellectual property. This modeling needs to be continuous, as AI solutions continually target new and under-utilized data sets.
    • Centers of Excellence can provide strategic oversight and institutional coordination for the secure deployment of AI.
      CoEs can play an important role in upholding good governance and best practices for AI-inclined organizations, including through skill sharing and the development of consistent protocols. They enable centralized governance and monitoring and support a consistent approach to AI use, offering better contextualization of AI use cases and improved understanding of enterprise data assets. More broadly, companies should consider organizational re-design, new reporting lines and adapted processes to allow reasonable levels of adoption without cyber risk becoming an issue to be worked through on a case-by-case basis.

    Summary

    Organizations face heightened cyber risks with AI integration, requiring a multi-faceted approach to cybersecurity. Training, governance and operational strategies must evolve to address the complexities of AI, ensuring responsible use and robust data protection. Centers of Excellence emerge as pivotal in orchestrating secure AI adoption and mitigating shadow IT phenomena.
