
Why AI fuels cybersecurity anxiety, particularly for younger employees

Workers say they are worried that they are putting their organizations, and their careers, at risk, according to a new EY survey.

A cornerstone of cybersecurity remains true: while humans may be the weakest link, they are also our greatest advantage in fighting cyber adversaries. Many fall victim to misdirection because of their curiosity, their desire to help or simply their limited knowledge of a technology landscape that is continually evolving. But if we invest in education, training and a culture of awareness, humans can become the greatest cyber protectors. Complicating matters, artificial intelligence (AI) has supercharged the scale and complexity of threats, and new EY research shows that employees have both deeper awareness of and greater anxiety about cybersecurity, with marked divides across generations.

Almost 80% of respondents are concerned about the use of AI in carrying out cyber attacks, and 39% are not confident that they know how to use AI responsibly, according to the 2024 EY Human Risk in Cybersecurity Survey, which builds on our initial 2022 report. Over 90% say organizations should regularly update their training to keep pace with AI’s evolving role in cyber threats, but only 62% say their employers have prioritized education about responsible AI usage.

Historically, attack vectors have come in the form of phishing emails and USB drives laced with malware. Lately, attacks surface as social engineering, or appeals to human nature, such as requests for donations after natural disasters and global conflicts. Now, AI is making attacks smarter: it more effectively mimics genuine, empathetic communication and the way humans think and process information, while also lowering the time and human effort required. These attacks are also more complex. Robust, AI-driven deepfakes can take on another person’s voice in a virtual meeting or phone call, for example.

These rapidly evolving technologies, combined with threats from crafty, malicious nation-states amid geopolitical tensions, open the door to attacks that many employees simply don’t understand or feel equipped to confront. Humans can be the lion at the gate protecting the castle, but they can also be the person opening the door to let in what they think is a little kitten. They must be armed with knowledge and training to make the correct decision.

As GenAI moves into widespread usage at companies, CISOs must cultivate a culture of confidence. Yet the survey reveals that employees, particularly younger ones, are feeling anxiety instead: maintaining a constant defensive posture can feel like being perpetually under attack.

Surprising fears about AI across generations

Gen Z and millennial workers feel less equipped than their older colleagues to identify and respond to AI-powered threats, the survey shows. The disparity extends to more basic attacks as well. Despite being digital natives, only 31% of Gen Z respondents feel very confident identifying phishing attempts. Nearly three in four employees in this age cohort said they have opened an unfamiliar link, a far higher share than among millennials (51%), Gen X (36%) and baby boomers (25%). Across all age groups, 53% are worried their organization will be the target of an attack, and 34% fear they may be the ones leaving their organization vulnerable.

Yet respondents also describe themselves as knowledgeable about cybersecurity overall: 86% of Gen Z say so, compared with 75% in 2022. This highlights a paradox: more knowledge has translated into greater fear. Employees are processing the severity of the risks, but they’re not necessarily feeling more prepared, and younger workers at the earlier stages of their careers worry that one false move could lead to repercussions.

We should look at fear as a sign that employees understand the relevance and the risk. Worrying is an indicator that threat awareness is breaking through. But is that the sentiment employers are seeking to foster?

Gen Z has been warned since birth about the good and bad aspects of technology, including how one bad decision early in life can remain online permanently and how privacy cannot be taken for granted. The EY survey shows that 64% of Gen Z and 58% of millennials fear they would lose their job if they ever left their organization vulnerable to an attack.

“Gen Z is more likely to report their cyber mistakes because they understand or at least believe that it’s more likely somebody’s going to find out, and if they don’t report it, they’ll get in more trouble,” Merriman said.


What companies should do instead to instill cyber confidence

Executives must explore ways to promote simplicity, transparency and positive reinforcement. First, make your cyber protocols as simple and transparent as possible. If an employee needed to report something, would they know how? Whom would they turn to? CISOs know the answers to those questions, but our survey shows broad confusion among workers. Younger generations are more likely to say they do not fully understand the processes for reporting suspected cyber attacks: 39% of Gen Z and 29% of millennials, compared with 19% of Gen X and 15% of baby boomers.

You also have to know your audience and make reporting easy. It is far too much to ask people to click through numerous screens on a website before they can even describe what they’ve seen and submit it, or to place a call into an automated system.


Being single-threaded on reporting tools can become a barrier. Different groups prefer different reporting channels, such as a hotline to a human, chatbots or email, so offering multiple channels has become a requirement for today’s organizations. GenAI is also helping here, powering chatbots that employees can query about policies and reporting avenues.

Communication is also vital, not just from the CISO but from frontline managers as well. Those messages should stress partnering, not policing: cybersecurity must be everyone’s concern, and to some extent we’re all reacting to the negative and positive aspects of a constant state of disruption.

Instead of “gotcha” phishing traps,¹ Guinn recommends gamification, in which different functions within your organization compete to demonstrate their level of cyber awareness. Such campaigns against social engineering, including phishing attacks, can come with awards, whether for head-to-head exercises or for year-over-year improvement, and they should be an integral part of your company’s rewards and recognition programs. This approach takes your people’s natural curiosity and competitiveness and channels them in a more helpful direction. In our survey, the respondents who feel “rusty” on training are the most fearful of using tech, while 94% of employees who received training within the past year say cybersecurity is a priority.

Within an organization’s cyber function, it’s useful to perform regular tabletop exercises in a limited lab environment, demonstrating the potential fallout and cascading effects of a cyber attack, as well as how well mitigation responses perform. A lab environment is virtualized to reflect the day-to-day reality of your IT infrastructure but quarantined from your actual production environment.

With regard to AI, the C-suite and senior leaders must embrace transparency about how AI is developed and deployed enterprise-wide, and demonstrate responsible AI practices themselves to mitigate risks. At EY, the mantra is “If you give people tools, give people training,” and the organization has published its own commitment to developing and using AI ethically and responsibly, which anyone can access.

Ultimately, employers need to find the happy medium between knowing the concerns and being consumed by them. “We need to envision a world where we have healthy skepticism about digital interactions,” Guinn said. “Are you prepared to help your workforce be better sophisticated when a cyber event occurs because they are the weakest link?”

Ernst & Young LLP (EY US) commissioned a third-party vendor to conduct the 2024 Human Risk in Cybersecurity Survey. The online survey includes 1,000 full-time and part-time US employees over 18 years old whose current job requires the use of a work-issued laptop/computer (i.e., a tech-enabled professional). The sample was balanced across age, gender, household income, race/ethnicity and region. The survey was fielded between March 7 and 15, 2024. The margin of error for the total sample is plus or minus 3 percentage points.

1. “Why Companies Shouldn’t Try to Catch Employees With Fake Phishing Emails,” The Wall Street Journal, wsj.com/articles/companies-should-not-try-catch-employees-fake-phishing-emails-931fdb7d, 5 June 2023.

