
Will GenAI change risk or response in the fraud landscape?

As generative AI makes it easier than ever to create bespoke images, audio and video, banks should address new impersonation risks.


In brief

  • How is the fraud landscape changing with the advent of generative AI?
  • What do technical solutions look like?
  • What role do humans still play in fraud detection?

Fraud is probably as old as mankind itself. As our tools and societies evolved, so too did the ways people tried to swindle others. Generative AI is the latest tool in the fraudster’s toolkit.

Banks and insurance companies have benefited from simplified, streamlined onboarding capabilities thanks to video identification and fully digital and remote processes. These are enabled by AI techniques to help verify the submitted documents and identification videos automatically. However, the advent – and general accessibility – of generative AI tools has reset the balance of efficiency and risk. Leaders in the financial services sector should be aware of increased fraud risks due to this technology.

New threats

The broadly available generative AI tools are now multimodal in both input and output, meaning they can process and produce text as well as voice and video. For fraudsters, these techniques promise a faster return on their (time) investment and make it easier to launch complex, personalized attacks.
Generative AI tools are used to generate realistic-sounding phishing emails in a fraction of the time it would take a human team, and these are often so convincing that they sail past the filters commonly implemented by organizations.

Deepfake
USD 2m+: amount lost by victims of a fake video campaign apparently featuring Elon Musk.

The extent to which generative AI could disrupt our view of risk is exemplified by various high-profile images and videos that have captured the public imagination. From harmless fun, like the picture of Pope Francis in a puffer coat, to slanderous videos depicting celebrities drunk or in other compromising situations, deepfakes have become a source of entertainment – and misinformation. The same techniques can be used by scammers. According to the US Federal Trade Commission, fake videos of Elon Musk were used to defraud consumers of more than USD 2 million over six months [1]. Convinced by the videos’ apparently authentic messages from the billionaire, the victims transferred large sums in cryptocurrencies.

Even more nefarious is the use of voice AI in a recent case [2] involving a mother who received a call from someone who sounded like her daughter. The caller claimed that the daughter had been kidnapped and would be released on payment of a ransom. Only after trying to call her daughter directly did the mother realize that the voice on the other end had been AI-generated.

Banks are also targeted directly, with the first examples of voice scams becoming public [3]. Wealthy clients are at particularly high risk, as they are more likely to make public appearances, which provides material that can be used to train AI to impersonate their voices. Combined with social engineering to gather data on their banking relationships, such attacks mean that telephone instructions will demand increased scrutiny for authenticity in the future.

New approaches to fight fraud

Financial service providers rely on being able to identify their customers properly and can only continue to do so by acknowledging and responding effectively to these threats. Possible steps include:

  • Amending the anti-fraud operating model so that deepfake detection becomes an integral part of prevention and detection controls.
  • Carrying out regular threat assessments to continuously identify, evaluate and understand potential risks and vulnerabilities within products and services that can be targeted by deepfakes.
  • Ensuring systematic evaluation and analysis of potential deepfake-related risks that the organization or its customers may encounter.

Of course, while emerging technology is the source of this new threat, it is also expected to come to the rescue. This starts with metadata screening and with digital watermarks that providers of generative AI models add to their output [4]. Unfortunately, malicious actors can circumvent precautions like these, for example by stripping metadata or re-encoding files. Many are therefore pinning their hopes on artificial intelligence itself, especially reinforcement learning, to detect AI-generated content. This will initially be easier for images, while fraud using voice and video will remain harder to detect. The market for detection tools is growing quickly.
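To illustrate what metadata screening can look like in practice, here is a minimal Python sketch that inspects an image’s embedded metadata for generator markers. The marker list and file name are illustrative assumptions; real screening would rely on provider documentation and emerging standards such as C2PA content credentials.

```python
from PIL import Image
from PIL.ExifTags import TAGS

# Marker strings some generators embed in metadata. Illustrative only;
# a real deployment would maintain this list from threat intelligence.
SUSPECT_MARKERS = ["dall-e", "midjourney", "stable diffusion", "generated"]

def screen_image_metadata(path: str) -> list[str]:
    """Return metadata findings that hint at AI generation."""
    findings = []
    img = Image.open(path)

    # 1. EXIF fields (e.g., the 'Software' tag) sometimes name the generator.
    for tag_id, value in img.getexif().items():
        tag = TAGS.get(tag_id, str(tag_id))
        if any(marker in str(value).lower() for marker in SUSPECT_MARKERS):
            findings.append(f"EXIF {tag}: {value}")

    # 2. Format-specific info chunks (PNG text chunks, XMP, etc.).
    for key, value in img.info.items():
        if any(marker in str(value).lower() for marker in SUSPECT_MARKERS):
            findings.append(f"info[{key}]: {str(value)[:80]}")

    return findings

if __name__ == "__main__":
    # Hypothetical file from a remote onboarding flow.
    hits = screen_image_metadata("submitted_id_photo.png")
    print("Suspicious metadata:", hits or "none found")
```

Note that a clean result proves nothing: as mentioned above, metadata and watermarks are easy to strip, so this check can only ever be one signal among many.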


Most solutions are based on recognizing whether the person in a video shows the typical signs of being human, e.g., blinking, skin with moles, physically correct reflections in glasses, or facial hair; these are often missing in AI-generated video personas. The other source of information is behavioral biometrics collected from the input device, e.g., a smartphone or computer, which indicate how the user is handling it; an alarm is raised if these biometrics deviate markedly from expectations.
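As a rough illustration of the second approach, the sketch below compares a session’s device-handling features against a user’s historical baseline and flags large deviations. The feature names, sample values and threshold are illustrative assumptions; production systems use far richer behavioral models.

```python
import statistics

# Hypothetical behavioral features captured from the input device,
# e.g., typing cadence, touch pressure, device tilt. Names are illustrative.
FEATURES = ["keystroke_interval_ms", "touch_pressure", "device_tilt_deg"]

def deviation_score(baseline: dict[str, list[float]],
                    session: dict[str, float]) -> float:
    """Mean absolute z-score of session features vs. the user's history."""
    z_scores = []
    for feature in FEATURES:
        history = baseline[feature]
        mean = statistics.mean(history)
        stdev = statistics.stdev(history) or 1.0  # guard against zero spread
        z_scores.append(abs(session[feature] - mean) / stdev)
    return sum(z_scores) / len(z_scores)

# Example: a session that deviates markedly from the user's profile.
baseline = {
    "keystroke_interval_ms": [110.0, 125.0, 118.0, 130.0, 121.0],
    "touch_pressure":        [0.42, 0.40, 0.45, 0.43, 0.41],
    "device_tilt_deg":       [12.0, 14.5, 13.2, 12.8, 13.9],
}
session = {"keystroke_interval_ms": 210.0,
           "touch_pressure": 0.80,
           "device_tilt_deg": 2.0}

ALERT_THRESHOLD = 3.0  # illustrative cut-off
if deviation_score(baseline, session) > ALERT_THRESHOLD:
    print("Behavioral biometrics deviate from profile: escalate for review")
```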

Although products are improving, most tech-based detection solutions working solely with the image, voice or video in question currently still struggle to establish authenticity with confidence. They also all require contextual information to deliver reliable output, such as the channel via which the data was received and whether it matches expected customer behavior, and they need timely threat intelligence to stay aware of new fraud types. For now, human judgement and vigilance are the most important lines of defense. This makes educating employees to recognize and respond to deepfakes a vital step in bolstering an organization’s fraud defenses.
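The role of context can be made concrete with a small sketch that combines a detector score with channel, behavior and threat-intelligence signals into a triage decision. All signal names, weights and thresholds below are illustrative assumptions, not a calibrated model.

```python
from dataclasses import dataclass

@dataclass
class Signals:
    detector_score: float     # 0..1 score from a deepfake detector (illustrative)
    unusual_channel: bool     # instruction arrived via an unexpected channel
    behavior_mismatch: bool   # request deviates from known customer behavior
    threat_intel_match: bool  # matches a pattern from current threat intelligence

def triage(s: Signals) -> str:
    """Combine detector output with context; weights are illustrative."""
    risk = s.detector_score
    if s.unusual_channel:
        risk += 0.2
    if s.behavior_mismatch:
        risk += 0.2
    if s.threat_intel_match:
        risk += 0.3
    if risk >= 0.8:
        return "block and investigate"
    if risk >= 0.5:
        return "route to human review"  # human judgement remains the backstop
    return "proceed with standard checks"

# A borderline detector score becomes actionable once context is added.
print(triage(Signals(0.35, unusual_channel=True, behavior_mismatch=False,
                     threat_intel_match=False)))  # -> route to human review
```

The design point is that the detector score alone (0.35) would not trigger any action; only the combination with contextual signals pushes the case to a human reviewer.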

Summary

Generative AI makes it easier for fraudsters to create convincing impersonations in order to obtain assets or information. While technical detection solutions are emerging, human vigilance is still essential to counter this threat.
