Imagine your team contacts you to follow up on a voice message you left, confirming that the urgent payment you requested for one of your biggest suppliers has been processed under the new terms and amended banking information you provided. The problem: you left no message and made no such request.
With the advancement and proliferation of deep learning and generative AI, synthetic CEO-impersonation “deepfakes” like this are infiltrating the corporate world, aided by sophisticated online video generation programs that cost as little as $24 a month.1 Threat actors are using deepfake technologies to deceive and threaten businesses and to steal their information and funds.
Today, algorithms can replicate an individual’s voice from as little as a three-second recording and pair that audio with visual masks, producing believable, realistic-looking deepfakes. Cybercrime costs are expected to reach $10.5 trillion annually by 2025, and up to 80% of companies report that voice or video deepfakes represent real threats to their operations.2, 3 If addressing deepfakes isn’t at the top of your organization’s risk agenda, it should be.
Deepfake dangers – bringing warfare tactics to the boardroom
The apparent legitimacy and realism of artificial communications are setting off alarm bells around the world. Experts estimate that 90% of online content may be synthetically generated by 2026.4
Take, for example, the deepfake videos impersonating political leaders to spread disinformation, a tactic increasingly deployed in modern-day warfare. This threat not only tops the risk agendas of government agencies; deepfakes have also entered the business environment:
- Almost a quarter of a million dollars was misappropriated from a British energy company after cybercriminals, using voice-synthesizing technology to impersonate the company’s CEO, requested that a payment be made.5
- In the United Arab Emirates, an organization was defrauded of $35 million after socially engineered emails and deepfake audio, much like the example above, convinced an employee to transfer funds to a fraudulent bank account as part of what appeared to be ongoing acquisition discussions.6
- And deepfake recordings have proven capable of bypassing two-factor authentication, voice recognition and other security features used by leading organizations across a variety of industries.7
These examples are only the tip of the iceberg of how threat actors use deepfakes for malicious ends. Organizations with extensive “know your customer” (KYC) validation, or those that rely on visual evidence, may soon find themselves at risk.
For example, image recognition used in the customer onboarding process for platforms such as crypto-trading, gaming and lotteries could unwittingly accept synthetically generated images of patrons. Wider adoption of voice authentication across organizations such as financial institutions could likewise hand malicious actors a new vector for identity impersonation.
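To make the onboarding risk concrete, here is a deliberately minimal sketch, in Python, of a KYC-style check that approves a customer when a submitted selfie is “close enough” to the ID photo. The `embed_face` helper and the 0.6 threshold are hypothetical stand-ins for a vendor model, not any real product’s API. The flaw to notice: the flow measures similarity only, so a sufficiently realistic synthetic selfie of the victim passes as easily as a genuine one.

```python
import numpy as np

def embed_face(image: np.ndarray) -> np.ndarray:
    # Toy stand-in for a trained face-embedding network: flatten,
    # truncate and L2-normalize the pixels. The weakness demonstrated
    # below does not depend on which embedding model is used.
    vec = np.resize(image.astype(np.float64), 128)
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def naive_kyc_check(id_photo: np.ndarray, selfie: np.ndarray,
                    threshold: float = 0.6) -> bool:
    # Approve onboarding when the selfie "matches" the ID photo.
    # Nothing here verifies that the selfie came from a live person.
    distance = np.linalg.norm(embed_face(id_photo) - embed_face(selfie))
    return distance < threshold

# A synthetically generated selfie of the victim (simulated here as the
# ID photo plus imperceptible noise) sails through the check.
rng = np.random.default_rng(0)
id_photo = rng.random((64, 64))
synthetic_selfie = id_photo + rng.normal(0.0, 0.01, id_photo.shape)
print(naive_kyc_check(id_photo, synthetic_selfie))  # True: fake accepted
```

A more defensible flow would pair the match with a liveness challenge, such as a randomized head movement or blink prompt captured in real time, and with provenance checks on the submitted media, so that similarity alone is never sufficient for approval.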
Time and distance create additional opportunities for threat actors. An overnight attack targeting a North American company’s Asian division, for instance, by impersonating an employee to request access to internal systems, could wreak havoc by exploiting the gap between the two regions’ working hours.
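One practical mitigation for this overnight scenario is to treat out-of-hours requests as anomalies. The sketch below, with hypothetical time zone and business-hours parameters, flags a privileged access request that arrives outside the impersonated employee’s local working day, exactly the window such attacks rely on.

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def outside_home_office_hours(request_time: datetime,
                              home_tz: str = "America/Toronto",
                              start_hour: int = 8,
                              end_hour: int = 18) -> bool:
    # Convert the request timestamp to the employee's home time zone
    # and flag anything outside their normal working day.
    local = request_time.astimezone(ZoneInfo(home_tz))
    return not (start_hour <= local.hour < end_hour)

# 10:00 in Singapore is 21:00 the previous evening in Toronto, the kind
# of market-hours gap an overnight impersonation attack exploits.
request = datetime(2024, 3, 5, 10, 0, tzinfo=ZoneInfo("Asia/Singapore"))
print(outside_home_office_hours(request))  # True: escalate for verification
```

A flagged request would then route to a secondary channel, such as a callback to a known number, rather than being honored automatically.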
Sniffing out the deepfake
Like deepfake technology itself, solutions to combat these practices are evolving. The commercial solutions on the market today, however, may not be mature enough to reliably detect deepfake attacks and protect companies against them. Detection is complicated by wide variation in voices and recording environments, from background noise in audio clips to differences in accent. Meanwhile, generative AI models are evolving rapidly, and new algorithms for producing deepfake video and audio are continually being introduced. These advances will make emerging deepfakes ever harder to detect, demanding that detection software evolve at the same pace.
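For illustration, here is a minimal sketch of the classical pipeline many detectors build on: summarize each audio clip with spectral features, then train a classifier on labeled real and fake examples. The file paths are placeholders and the feature choice is one assumption among many; the point is that such a model only knows the generators it was trained on, which is why newly introduced algorithms keep slipping past.

```python
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def audio_features(path: str) -> np.ndarray:
    # Summarize a clip as the mean and spread of its MFCCs, a classic
    # spectral fingerprint. Background noise and accent shift these
    # statistics, which is one reason detection generalizes poorly.
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Placeholder corpus of labeled clips (0 = genuine, 1 = synthetic).
real_clips = ["real_0001.wav", "real_0002.wav"]
fake_clips = ["fake_0001.wav", "fake_0002.wav"]

X = np.array([audio_features(p) for p in real_clips + fake_clips])
y = np.array([0] * len(real_clips) + [1] * len(fake_clips))

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Caveat: the model has only seen fakes from known generators. A clip
# from a new generative model is out-of-distribution, so this score
# says little about tomorrow's deepfakes.
suspect = audio_features("suspect_clip.wav").reshape(1, -1)
print(clf.predict_proba(suspect))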