Can deepfakes translate into deep trouble for Canadian businesses?

Deepfake communications are creating a world where threat actors can pose as anyone and say anything to disrupt and influence society.


In brief

  • Deepfakes — audio or video that has been digitally altered — are increasingly being used to impersonate public figures, spread misinformation and commit commercial fraud.
  • As generative artificial intelligence (AI) and disruptive technologies continually evolve and are made accessible to anyone, threat actors can now produce high-quality deepfakes quickly and easily, at little cost to them but great expense to governments and businesses worldwide.
  • Addressing this issue will be critical to avoid falling victim to deepfake-related crimes and eroding stakeholder trust, damage that can harm brand, reputation and an organization’s bottom line.

Imagine your team contacts you to follow up on a voice message you left, confirming that the urgent payment you requested for one of your biggest suppliers has been processed, reflecting new terms and amended banking information provided by you. However, you sent no message and made no such request.

With the advancement and proliferation of deep learning and generative AI, synthetic CEO impersonation “deepfakes” similar to this are infiltrating the corporate world with the help of sophisticated online video generation programs that cost as little as $24 a month.1 Threat actors are using deepfake technologies to deceive, threaten and steal information and funds from businesses.

Today, algorithms can replicate an individual’s voice from as little as a three-second recording and pair that audio with visual masks, producing believable, realistic-looking deepfakes. Cybercrime costs are expected to reach $10.5 trillion annually by 2025, and up to 80% of companies report that voice or video deepfakes represent real threats to their operations.2, 3 If addressing deepfakes isn’t at the top of your organization’s risk agenda, it should be.

Deepfake dangers – bringing warfare tactics to the boardroom

The apparent legitimacy and realism of artificial communications are setting off alarm bells around the world. Experts estimate that 90% of online content may be synthetically generated by 2026.4

Take, for example, the deepfake videos impersonating political leaders to spread disinformation, a tactic increasingly deployed in modern-day warfare. Deepfakes not only top the risk agendas of government agencies, they have also entered the business environment:

  • Almost a quarter of a million dollars was misappropriated from a British energy company after cybercriminals — using voice-synthesizing technology to impersonate the company’s CEO — requested payment be made.5
  • In the United Arab Emirates, an organization was defrauded of $35 million after socially engineered email messages and deepfake audio similar to the example above convinced a company employee to transfer funds to a fraudulent bank account as part of ongoing acquisition discussions.6
  • And we’re learning that deepfake recordings have proven capable of bypassing two-factor authentication, voice recognition and other security features used by leading organizations across a variety of industries.7

These examples are the tip of the iceberg in terms of how threat actors are using deepfakes for malicious purposes. Organizations with extensive “know your customer” (KYC) validation or those that rely on visual evidence may soon find themselves at risk.

For example, the use of image recognition in the customer onboarding process for platforms like crypto-trading or gaming and lotteries could unwittingly accept synthetically generated images of patrons. Wider adoption of voice authentication technology across organizations such as financial institutions could also provide an attack vector for malicious actors in an identity impersonation attack.

Time and distance create additional opportunities for threat actors. Overnight attacks — targeting a North American company’s Asian division by impersonating an employee to request access to internal systems, for example — could wreak havoc by exploiting the difference in the respective markets’ business hours.

Sniffing out the deepfake

Like deepfake technology itself, solutions to combat such practices are evolving. The maturity of commercial solutions on the market today, however, may be insufficient to reliably detect deepfake attacks and protect companies against them. Wide variation in voices, accents and recording environments — background noise in audio clips, for instance — complicates detection. At the same time, the rapid evolution of generative AI means new algorithms for producing deepfake videos and audio clips are continually being introduced. These advancing capabilities will make emerging deepfakes increasingly difficult to detect, demanding that detection software evolve at the same pace.

  Are you ready?

   1. Do you have strong fraud detection processes in place to protect your organization and clients?
   2. Have you evaluated the legal implications of deepfakes for your organization and identified ways to mitigate exposure?
   3. Has your IT infrastructure security evolved to protect your organization against deepfake attacks?

Better together

At EY, we know that curiosity in AI does not equal confidence. We’ve invested US$1.4 billion in developing AI solutions across the globe, and we’re working with AI companies and acquiring new capabilities to better serve EY clients.

Our AI experience in fraud analytics and data security is driving proof-of-concept efforts to train deep learning models that detect deepfake audio clips. From a policy perspective, we’re also proposing measures to address these threats, advocating requirements to watermark AI-generated content and outright bans on malicious deepfake content.

There’s little doubt that deepfakes can present a danger. With 63% of people already convinced they’re being lied to by business leaders, according to the latest Edelman Trust Barometer, failure to stay ahead of malicious actors and evolving technologies could have significant, long-term repercussions for organizations and society as a whole.8

Summary

If you’re considering how deepfakes may affect your business, our advisors and risk management framework can help your teams identify and define a robust strategy to help mitigate the threat of synthetic media. Or if you’ve fallen victim to a deepfake attack, our deep experience in crisis management, fraud, incident response and investigations can assist in detecting, identifying and remediating matters.
