The extent to which generative AI could disrupt our view of risk is exemplified by various high-profile images and videos that have captured the public imagination. From harmless fun, like the picture of Pope Francis in a puffer coat, to defamatory videos depicting celebrities drunk or in other compromising situations, deepfakes have become a source of entertainment – and misinformation. The same techniques can be used by scammers. According to the US Federal Trade Commission, fake videos of Elon Musk were used to defraud consumers of more than USD 2 million over six months [1]. Convinced by the apparently authentic messages from the billionaire, the victims transferred large sums in cryptocurrencies.
Even more nefarious is the use of AI-generated voices, as in a recent case [2] in which a mother received a call from someone who sounded like her daughter. The caller claimed that the daughter had been kidnapped and would be released only on payment of a ransom. Only after calling her daughter directly did the mother realize that the voice on the line had been AI-generated.
Banks are also targeted directly, with the first examples of voice scams becoming public [3]. Wealthy clients are at particularly high risk, as they are more likely to make public appearances, which provides material that can be used to train AI to impersonate their voices. Combined with social engineering to gather data on their banking relationships, such impersonation will demand closer scrutiny of the authenticity of telephone instructions in the future.
New approaches to fight fraud
Financial service providers rely on being able to identify their customers properly, and they can only continue to do so by acknowledging these threats and responding effectively. Possible steps include:
- Amending the anti-fraud operating model to make deepfake detection an integral part of detection and prevention controls (see the sketch after this list).
- Carrying out regular threat assessments to identify, evaluate and understand the vulnerabilities in products and services that deepfakes could target.
- Ensuring systematic evaluation and analysis of potential deepfake-related risks that the organization or its customers may encounter.
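To make the first step concrete, the sketch below shows one way a deepfake score from a voice-analysis model could feed existing payment controls. The detector, thresholds and field names are assumptions for illustration, not a prescribed design; a real anti-fraud operating model would calibrate them against historical cases and vendor tooling.

```python
# Minimal sketch: folding a voice-deepfake score into controls for
# telephone payment instructions. Thresholds, field names and the
# scoring model are hypothetical assumptions for illustration only.
from dataclasses import dataclass


@dataclass
class PhoneInstruction:
    client_id: str
    amount_chf: float
    deepfake_score: float  # 0.0-1.0 from a (hypothetical) voice-analysis model


def required_action(instruction: PhoneInstruction) -> str:
    """Map a scored instruction to the next control step."""
    # Strong synthetic-voice suspicion: stop the payment outright.
    if instruction.deepfake_score >= 0.8:
        return "block_and_investigate"
    # Borderline score or high value: verify out of band before executing.
    if instruction.deepfake_score >= 0.5 or instruction.amount_chf >= 100_000:
        return "call_back_on_registered_number"
    return "standard_processing"


# Example: a large transfer with a borderline voice score triggers a call-back.
print(required_action(PhoneInstruction("C-123", 250_000, 0.55)))
```

The design point is simply that the deepfake signal becomes one more input to controls the bank already operates, such as out-of-band call-backs, rather than a standalone tool.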
Of course, while emerging technology is the source of this new threat, it's also expected to come to the rescue. This starts with metadata screening and the addition of digital watermarks to the output of generative AI models by their providers [4]. Unfortunately, malicious actors can circumvent precautions like these. Many are pinning their hopes on artificial intelligence itself, especially on reinforcement learning, to detect AI-generated content. Detection will initially be easier for images, while fraud using voice and video will remain harder to uncover. The market for detection tools is growing quickly.
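As a simple illustration of metadata screening, the following sketch scans an image's embedded metadata for markers left by common generators or provenance standards. It assumes the Pillow library is installed; the marker list and file name are illustrative assumptions, not an authoritative registry. As noted above, such a check is easily circumvented, since stripping or rewriting metadata removes the evidence.

```python
# Minimal sketch of metadata screening for AI-generation markers,
# assuming Pillow is installed (pip install Pillow). The marker list
# and file name are hypothetical, for illustration only.
from PIL import Image

# Strings some generators or provenance standards leave in metadata (assumption).
GENERATOR_MARKERS = ("dall-e", "midjourney", "stable diffusion", "c2pa", "firefly")


def screen_image_metadata(path: str) -> list[str]:
    """Return metadata snippets that hint at AI generation (empty = no hit)."""
    img = Image.open(path)
    # Format-level metadata (e.g., PNG text chunks) lands in img.info;
    # the EXIF "Software" tag (305) often names the producing tool.
    candidates = [str(v) for v in img.info.values()]
    software = img.getexif().get(305)
    if software:
        candidates.append(str(software))
    hits = []
    for value in candidates:
        if any(marker in value.lower() for marker in GENERATOR_MARKERS):
            hits.append(value[:80])
    return hits


print(screen_image_metadata("incoming_document.png"))  # hypothetical file
```

A screen like this can only ever be a first filter: a hit is strong evidence, but an empty result proves nothing, which is why the detection market is moving toward models that analyze the content itself.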