EY refers to the global organization, and may refer to one or more, of the member firms of Ernst & Young Global Limited, each of which is a separate legal entity. Ernst & Young Global Limited, a UK company limited by guarantee, does not provide services to clients.
How EY can help
-
Discover how EY's Next generation security operations & response team can help your organization manage leading-class security operations in a programmatic way.
Read more
A multifaceted way to accelerate cyber attacks based on AI and ML
Among chief information security officers (CISOs) and the C-suite, only 1 in 5 consider their cybersecurity effective today and well positioned for tomorrow, according to the EY 2023 Global Cybersecurity Leadership Insights Study. On average, participants say they responded to 44 cyber incidents in 2022, and for 76% of those incidents, detection and response took six months or longer. Over the past five years, known cyber attacks have increased about 75%, according to the Cyber Events Database at the Center for International and Security Studies at Maryland. The exact frequency of AI-driven attacks is unknown, but it is evident that the use of AI in cyber attacks is increasing and poses a growing danger to both organizations and individuals.
Adversarial ML attacks can utilize an organization’s own AI algorithms to produce harmful inputs and contaminate legitimate algorithms. Through data contamination, hackers manipulate the data used to train an AI model, leading to incorrect decision-making, such as misidentifying malicious code as safe. AI algorithms can also be exploited through adversarial ML, where attackers use methods to trick the AI system into making incorrect decisions, resulting in the system failing to detect or overlooking malicious activities.
More concerning than the progress of attackers’ abilities is the shift in their tactics. Traditional strategies were aimed at avoiding detection and bypassing an organization’s cybersecurity measures. Now, adversaries are launching attacks directed at cybersecurity controls. According to Microsoft, AI algorithms utilized for detecting malware could be susceptible to data contamination attacks, where attackers introduce malicious software into the training data set, causing the AI to wrongly classify it as harmless.1
Other recent and well-known AI-based cyber attacks have included techniques such as:
- Phishing attacks that have used ML algorithms to generate personalized and convincing phishing emails that are more likely to trick users into giving up their personal information or login credentials
- Data exfiltration, in which an AI program is tricked into divulging personally identifiable information or other proprietary data
- Ransomware that uses ML algorithms to adapt to new security measures and evade detection
- AI automation used to create and distribute malware, allowing attackers to make and spread malware faster and more effectively
- Denial-of-service attacks that use ML algorithms to generate high volumes of network traffic to overwhelm and take down target systems
The risks posed by AI technologies are further exacerbated by the ease with which they can be used. Innovations are being introduced quickly and widely, increasing the pool of potential threat actors that organizations must confront. The new normal will bring:
- A new pool of threat actors with various skill levels and undefined motives — for instance, individuals who are simply curious or are seeking the notoriety of being a “hacker” may not understand the impacts of their actions.
- Attacks becoming more opportunistic and scalable with the benefit of AI. This means that organizations aren’t always targeted but rather attacked because they are vulnerable.
- AI bots can speed up the process of identifying weaknesses in target systems or launch focused attacks.