The existence of deepfake detection implies the existence of deepfake detectives. That’s arguably the role of the Kantara DeepfakesIDV discussion group, a collaborative industry effort focused on addressing the threat of deepfakes and related attacks in digital ID verification systems.
Via Paravision’s blog, a new article from the group breaks down “Deepfake Threats and Attack Vectors in Remote Identity Verification.” Led by Paravision’s Chief Product Officer, Joey Pritikin, and based on work from industry experts including Daniel Bachenheimer and Stephanie Schuckers, the article explores the methods and points of attack that deepfakes enable in remote identity verification.
Attackers, the piece says, “may use deepfake technology to present falsified identities, modify genuine documents, or create synthetic personas, exploiting weaknesses in the verification process.” The deepfake toolkit continues to grow, with elements such as face swaps, expression swaps, synthetic imagery and synthetic audio. Attacks may come in the form of physical presentation attacks, injection attacks or insider threats. “Understanding these threats,” say the deepfake detectives, is “crucial for developing robust defenses against the manipulation of identity verification systems.”
Types of deepfakes include face swaps, synthetic media, voice cloning
Much like their technological driver, AI, deepfakes are not one thing, but rather “a class of digital face and voice manipulation and/or synthesis which can be used to undermine a remote digital identity scheme.” Common formats include still and moving imagery, as well as fake audio.
Face swaps replace a person’s face in a video with another person’s face, often seamlessly. Expression swaps allow scammers to control a video avatar’s facial expressions. StyleGAN2-type synthetic imagery produces faces that are highly realistic but wholly fake, created by generative adversarial networks in which two opposing models iteratively refine details. Diffusion-based imagery creates realistic images from textual prompts. And next-gen video tools like Synthesia and HeyGen create fully synthetic avatars that can move and speak like real humans.
On the audio side, the threats are synthetic speech and voice cloning, the latter of which replicates a specific person’s voice to bypass voice authentication systems.
Common deepfake points of attack include image capture, injection into feed
In standard remote identity verification processes, biometrics such as a selfie photo are captured alongside identity documents, and the two are then compared to verify identity. This process presents a variety of attack points in both automated and manual workflows.
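To make that comparison step concrete, here is a minimal sketch in Python using the open-source face_recognition library. The file names and the acceptance threshold are illustrative assumptions rather than anything specified in the group’s article, and a production system would wrap this step in liveness and injection checks.

```python
# Minimal sketch of the selfie-vs-document comparison step in a remote
# identity verification flow. File names and the match threshold are
# illustrative assumptions; production systems add liveness detection and
# injection checks around this step.
import face_recognition

# Load the user's live selfie and the portrait cropped from the ID document.
selfie = face_recognition.load_image_file("selfie.jpg")
document = face_recognition.load_image_file("id_portrait.jpg")

selfie_encodings = face_recognition.face_encodings(selfie)
document_encodings = face_recognition.face_encodings(document)

if not selfie_encodings or not document_encodings:
    raise ValueError("No face found in one of the images")

# Lower distance means a closer match; 0.6 is the library's default cutoff.
distance = face_recognition.face_distance(
    [document_encodings[0]], selfie_encodings[0]
)[0]
print(f"Face distance: {distance:.3f} -> "
      f"{'match' if distance < 0.6 else 'no match'}")
```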
At the point of front-end image capture, there are risks of physical presentation attacks and of injection attacks that hijack the software interface. When identity documents are captured, a deepfake face can be inserted into a physical document, which may itself be genuine or wholly synthetic. Deepfakes can also be injected as digital images or video into the capture software interface. Manual workflows that use live video chats are likewise vulnerable to injected video simulating a webcam.
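As a toy illustration of why injected video is hard to stop, the sketch below flags capture devices whose reported labels match well-known virtual-camera software. The blocklist and device labels are assumptions for illustration; labels can be spoofed, so real deployments rely on stronger signals such as attested capture paths and challenge-response liveness.

```python
# Illustrative heuristic only: flag capture devices whose reported labels
# match known virtual-camera software. Device labels can be spoofed, so
# this is a first-line check at best, not real injection detection.
KNOWN_VIRTUAL_CAMERAS = {
    "obs virtual camera", "manycam", "snap camera", "xsplit vcam",
}

def looks_like_virtual_camera(device_label: str) -> bool:
    """Return True if the reported device label matches a known virtual camera."""
    label = device_label.lower()
    return any(name in label for name in KNOWN_VIRTUAL_CAMERAS)

print(looks_like_virtual_camera("OBS Virtual Camera"))  # True
print(looks_like_virtual_camera("FaceTime HD Camera"))  # False
```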
On the back end, there is a risk that deepfake content might be unintentionally stored in a system, or that an insider could subvert the host system.
How deepfakes are evolving into the ultimate fraud tool
As generative AI technology improves, deepfakes will continue to blur the line between real and virtual people. “Deepfake technology is rapidly advancing, producing increasingly realistic and convincing fake content that is harder to detect,” the article says. Improved personalization makes it easier to target specific individuals. New frontiers in processing power are making real-time deepfake creation and manipulation possible. Behavioral mimicry is being fine-tuned to replicate not just appearance and voice, but also mannerisms and behaviors. All in all, deepfakes are becoming easier to create and combine with other scams, and tougher to detect.
A post from Paravision on LinkedIn summarizes the argument for understanding how and when deepfake attacks happen. “From initial biometric capture to live video interactions, understanding these specific points of vulnerability is key to strengthening digital defenses and ensuring secure identity verification.” The deepfake detectives are on the case.