
Why AI and machine learning are cybersecurity problems — and solutions

Hackers are using these technologies to accelerate threats and exploit vulnerabilities. But you can use them to your advantage.


In brief
  • AI-enabled capabilities and algorithms are increasingly part of our home and work lives. The business opportunities are intertwined with risks.
  • AI’s evolution and its versatility make it critical for narrowing the gap between the sophistication of cyber attacks and our ability to minimize the damage.

Artificial intelligence (AI) is transforming how we live, work and communicate — whether with autonomous vehicles, interactive toys, solutions to address the shortage of medical personnel or generative AI (GenAI) platforms. But just as legitimate organizations utilize AI technologies to improve their operations, attackers are employing AI and machine learning (ML) to automate and increase the scope of their attacks, rendering them more effective and efficient.

The fallout has been immediate and growing: 75% of cybersecurity professionals have seen an increase in attacks over the last year, and 85% say that GenAI is likely behind this boom, according to Security Magazine.

Cybersecurity professionals must understand how cyber criminals increasingly exploit AI algorithms to launch attacks and steal sensitive information. These are the same algorithms that help businesses make smarter, data-driven decisions and that power medical devices and self-driving cars. To reduce these dangers, companies must understand how AI and ML affect cybersecurity and implement appropriate measures to secure their systems through threat and vulnerability management.

“AI has already established its usefulness in attackers’ playbooks, and its scope of adversarial use is growing,” says Traci Gusher, EY Americas Data and Analytics Leader. “It’s incumbent upon technology leaders to adopt AI technologies to combat cyber attacks and modernize the cybersecurity workforce.”

Conventional cybersecurity measures like data protection, access controls and network security remain essential for safeguarding AI systems, but AI also demands unique, often nuanced changes to how those measures function.

A multifaceted way to accelerate cyber attacks based on AI and ML

Among chief information security officers (CISOs) and the C-suite, only 1 in 5 consider their cybersecurity effective today and well positioned for tomorrow, according to the EY 2023 Global Cybersecurity Leadership Insights Study. On average, participants say they responded to 44 cyber incidents in 2022, and for 76% of those incidents, detection and response took six months or longer. Over the past five years, known cyber attacks have increased about 75%, according to the Cyber Events Database at the Center for International and Security Studies at Maryland. The exact frequency of AI-driven attacks is unknown, but it is evident that the use of AI in cyber attacks is increasing and poses a growing danger to both organizations and individuals.


Adversarial ML attacks turn an organization’s own AI algorithms against it. Through data poisoning, hackers manipulate the data used to train an AI model, leading to incorrect decisions such as misidentifying malicious code as safe. At inference time, attackers can instead craft adversarial inputs designed to trick a trained system into making incorrect decisions, so that it overlooks or fails to detect malicious activity.
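
As a toy illustration of the training-data poisoning described above, consider a minimal nearest-centroid malware “detector” built on a single feature (a count of suspicious API calls). The classifier, feature and all numbers below are fabricated purely to show how mislabeled training samples can shift a model’s decision boundary:

```python
# Toy sketch of training-data poisoning (label flipping). The classifier,
# feature and all numbers are fabricated for illustration only.

def train_centroids(samples):
    """Compute the mean feature value per label (a nearest-centroid model)."""
    sums, counts = {}, {}
    for value, label in samples:
        sums[label] = sums.get(label, 0.0) + value
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def classify(value, centroids):
    """Assign the label whose centroid lies closest to the feature value."""
    return min(centroids, key=lambda label: abs(value - centroids[label]))

# Clean training set: benign files make few suspicious API calls, malware many.
clean = [(1, "benign"), (2, "benign"), (3, "benign"),
         (40, "malicious"), (45, "malicious"), (50, "malicious")]

# Poisoned training set: the attacker slips in malware-like samples
# mislabeled as benign, dragging the "benign" centroid toward malware.
poisoned = clean + [(42, "benign"), (44, "benign"),
                    (46, "benign"), (48, "benign")]

sample = 25  # suspicious-call count of a borderline file
print(classify(sample, train_centroids(clean)))     # malicious
print(classify(sample, train_centroids(poisoned)))  # benign
```

The same borderline file is flagged under the clean model but slips through once the training data is poisoned, without the attacker ever touching the classifier code itself.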


More concerning than the progress of attackers’ abilities is the shift in their tactics. Traditional strategies were aimed at avoiding detection and bypassing an organization’s cybersecurity measures. Now, adversaries are launching attacks directed at cybersecurity controls. According to Microsoft, AI algorithms utilized for detecting malware could be susceptible to data contamination attacks, where attackers introduce malicious software into the training data set, causing the AI to wrongly classify it as harmless.1


Other recent and well-known AI-based cyber attacks have included techniques such as:

  • Phishing attacks that have used ML algorithms to generate personalized and convincing phishing emails that are more likely to trick users into giving up their personal information or login credentials
  • Data exfiltration, in which an AI program is tricked into divulging personally identifiable information or other proprietary data
  • Ransomware that uses ML algorithms to adapt to new security measures and evade detection
  • AI automation that creates and distributes malware, allowing attackers to build and spread it faster and more effectively
  • Denial-of-service attacks that use ML algorithms to generate high volumes of network traffic to overwhelm and take down target systems


The risks posed by AI technologies are further exacerbated by the ease with which they can be used. Innovations are being introduced quickly and widely, increasing the pool of potential threat actors that organizations must confront. The new normal will bring:

  • A new pool of threat actors with various skill levels and undefined motives — for instance, individuals who are simply curious or seeking the notoriety of being a “hacker” may not understand the impact of their actions.
  • Attacks that become more opportunistic and scalable with the benefit of AI, meaning organizations aren’t always targeted but attacked because they are vulnerable.
  • AI bots that speed up the identification of weaknesses in target systems or launch focused attacks.

AI as a cyber solution in a changing world

In a world where machines are learning, human scale simply cannot compete. This is where AI becomes an enabler for your organization — not just attackers.

“Without the scale and accuracy of AI, cyber organizations will continue to be burdened by long analysis and decision-making cycles,” says Chris Hall, EY Americas Cybersecurity Growth Leader. “They will also be forced to scale resources in an already depleted talent market.”

AI and ML are powerful tools for augmenting humans and reducing technology costs across cybersecurity capabilities. For example, AI can fill in missing data points across technologies, facilitate cross-technology data analysis and make accurate predictions about the threats an organization will face. It can then recommend how best to resolve the weaknesses it identifies.

AI technologies can understand network architectures, contextualize the assets on the network, build high-fidelity attack paths and recommend remediation that resolves issues in the least disruptive way for business operations. They can also enhance user experiences by developing user profiles and relaxing or escalating cybersecurity controls based on behaviors and environmental factors (e.g., step-up authentication). Additionally, confidence and explainability in AI decision-making reduce the burden on technology teams by enabling automation with assurance.
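
The behavior-based escalation mentioned above (e.g., step-up authentication) can be sketched as a simple risk score. The profile fields, weights and threshold here are invented for illustration and do not reflect any particular product’s logic:

```python
# Hypothetical sketch of behavior-based control escalation ("step-up"
# authentication). Profile fields, weights and the threshold are invented.

def risk_score(profile, event):
    """Accumulate risk for a login event that deviates from the user profile."""
    score = 0
    if event["hour"] not in profile["usual_hours"]:
        score += 2                       # login at an unusual time of day
    if event["device"] not in profile["known_devices"]:
        score += 3                       # unrecognized device
    if event["country"] != profile["home_country"]:
        score += 3                       # geographic anomaly
    return score

def required_control(profile, event, step_up_at=3):
    """Escalate to step-up authentication once risk crosses the threshold."""
    if risk_score(profile, event) >= step_up_at:
        return "step_up_mfa"
    return "password_only"

profile = {"usual_hours": range(8, 19), "known_devices": {"laptop-01"},
           "home_country": "US"}

normal = {"hour": 10, "device": "laptop-01", "country": "US"}
odd = {"hour": 3, "device": "unknown-tablet", "country": "RO"}

print(required_control(profile, normal))  # password_only
print(required_control(profile, odd))     # step_up_mfa
```

A familiar login proceeds with minimal friction, while an off-hours login from an unknown device abroad triggers the stronger control, which is the trade-off between user experience and security the profile is meant to manage.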

Here are some considerations when evaluating your cyber strategy to protect AI systems:

  • Are your threat and vulnerability management systems equipped to evolve alongside your organization’s use of AI? The dynamic nature of AI systems means that they change and adapt over time, making it challenging to develop monitoring baselines. Cybersecurity professionals require a more fundamental understanding of the application’s functions, architecture, user groups, and inputs and outputs.
  • Do you understand how your AI systems are making decisions? Having visibility into the data the AI is drawing from and how it is weighting that data to make recommendations or take actions is critical for cyber teams. AI explainability helps response teams identify the root cause of a security incident. By providing transparency in AI decision-making, explainability can help ensure more accurate and comprehensive evidence collection.
  • How well is your data being protected to prevent downstream corruption in AI? AI systems rely heavily on large amounts of data, which can be vulnerable to attack and manipulation. Protecting the data used to train and validate AI models is critical to securing these systems. The impact of data corruption and poisoning can be long-lasting, making it one of the greatest threats to AI systems. While data poisoning produces inaccurate results, transfer learning attacks go further, retraining your models to create completely different outcomes. Transfer learning attacks most commonly affect well-known algorithms, such as those that determine what types of network access are considered normal, thereby subverting anomaly detection.
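
To make the anomaly-detection point in the last bullet concrete, here is a minimal sketch, with fabricated traffic numbers, of how a poisoned baseline can widen a z-score detector’s notion of “normal” until a data-exfiltration burst no longer stands out:

```python
# Toy sketch of baseline poisoning against a statistical anomaly detector.
# Outbound-traffic volumes (MB/hour) and the z-score rule are illustrative.
import statistics

def is_anomalous(value, baseline, z_threshold=3.0):
    """Flag a reading that lies more than z_threshold deviations from the mean."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return abs(value - mean) / stdev > z_threshold

clean_baseline = [10, 12, 11, 9, 10, 11, 12, 10]  # normal outbound traffic

# Attacker gradually pads the "learned" baseline with inflated readings,
# raising the mean and variance before attempting the actual theft.
poisoned_baseline = clean_baseline + [60, 80, 100, 120]

exfil_burst = 90  # MB/hour during the data exfiltration itself

print(is_anomalous(exfil_burst, clean_baseline))     # True: flagged
print(is_anomalous(exfil_burst, poisoned_baseline))  # False: hidden
```

Against the clean baseline the burst is dozens of standard deviations out; after the slow poisoning it sits comfortably inside the widened band of “normal,” which is why protecting the training and baseline data is as important as protecting the model.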

Summary

AI is not just transforming cybersecurity — it may become the backbone of cybersecurity. In a world where machines are learning, cyber threats are becoming increasingly sophisticated. Human capital cannot scale to deal with the volume or complexity — and even if it could, the relative lack of cyber-qualified resources on the job market would be a limiting factor. Securing AI systems requires a comprehensive approach that addresses the unique challenges posed by these dynamic and data-driven technologies.
