Deepfake detection, not so long ago guarding a niche corner of the internet, has erupted into a veritable battle royal, as fraudsters send a stream of deepfakes into the arena, spurring the appearance of more and more defenders from the biometrics field.
The latest to emerge from behind the curtain is the DefAI Project, which has its sights set firmly on deepfakes that pose a threat to the age assurance sector. DefAI is, in wrestling terms, a “stable”: a coalition of entities rallying around a common set of goals – and common antagonists.
The names running the project will be familiar to anyone in the age assurance sector: the UK-based Age Check Certification Scheme (ACCS), recently selected to run the Australian age assurance trial. Idiap, the Swiss research institute focused on AI. The Age Verification Providers’ Association (AVPA), which represents independent age assurance providers. And Lausanne-based age assurance vendor Privately SA, which is spearheading the project.
Per its website, this consortium, co-funded by Innosuisse and Innovate UK, aims to “investigate the dangers of presentation and injection attacks on age verification, possible defenses one can build against them, and the evaluation and standardization approaches the industry would require to assess its readiness to withstand such attacks.”
Regulations, GenAI threats increase in tandem, drive need for detection
In an email interview with Biometric Update, George Billinge, a spokesperson for the DefAI Project, says deepfakes present unique threats to biometric age assurance technologies.
“Previous work in this space has focused on facial recognition technologies, but not on age assurance,” he says. “This project draws on work that has already been done looking at the threat of deepfakes to facial recognition technology, but adapts and develops it for the context of age assurance.”
Billinge, a former policy manager at Ofcom who now runs the consultancy Illuminate Tech, notes the surge in global legislation to regulate online services, particularly for young users, without compromising their right to a robust online life. Combined with the emergence and continuing evolution of AI technologies that can be exploited for fraud, the need for effective deepfake detection tools is increasing on both the compliance and customer experience sides.
Project DefAI believes biometric age estimation (AE) technologies are a key part of the puzzle in creating an “age-aware” internet. Since deepfakes can be deployed as presentation or injection attacks used to circumvent age estimation technology, Privately and its partners identified the issue as one of existential concern for the age assurance sector.
“Detection of sophisticated attacks like 3D masks, partial or increasingly realistic deepfake attacks are challenging and pose a serious threat to the reliability of face recognition systems,” Billinge says. “Most of the presentation attack detection (PAD) methods available in prevailing literature try to solve the problem for a limited number of presentation attack instruments and on visible spectrum images.”
Meanwhile, injection attacks present falsified information directly to biometric systems by hijacking the digital feed. “If an attack can be generated in real time, a liveness detection approach may not work on such attacks at all,” Billinge says, explaining that typical attack classifications struggle to generalize for injection attacks and that training is often insufficient.
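Billinge’s distinction is worth making concrete: because injection attacks bypass the camera entirely, one family of defenses checks the provenance of each frame rather than its content. The sketch below is purely illustrative and not attributed to DefAI or Privately; it assumes a hypothetical capture SDK that shares a signing key with the verification server.

```python
import hmac
import hashlib
from dataclasses import dataclass

# Hypothetical sketch: sign frames at the trusted capture boundary, verify
# server-side. A frame injected into the feed without the key fails the check.
SHARED_KEY = b"capture-sdk-demo-key"  # in practice, provisioned per device

@dataclass
class Frame:
    pixels: bytes       # raw frame payload
    timestamp_ms: int   # capture time, also covered by the signature
    signature: bytes    # HMAC over payload + timestamp

def sign_frame(pixels: bytes, timestamp_ms: int) -> Frame:
    """Simulates the trusted capture side attaching a provenance tag."""
    msg = pixels + timestamp_ms.to_bytes(8, "big")
    tag = hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()
    return Frame(pixels, timestamp_ms, tag)

def verify_frame(frame: Frame) -> bool:
    """Server-side check: recompute the tag and compare in constant time."""
    msg = frame.pixels + frame.timestamp_ms.to_bytes(8, "big")
    expected = hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()
    return hmac.compare_digest(expected, frame.signature)

genuine = sign_frame(b"\x10\x20\x30", 1700000000000)
injected = Frame(b"\xde\xad\xbe\xef", 1700000000001, b"\x00" * 32)
print(verify_frame(genuine))   # True
print(verify_frame(injected))  # False
```

This only addresses feed hijacking at the transport layer; a real-time deepfake presented to a genuine camera would still pass, which is why content-based PAD and provenance checks are complementary rather than interchangeable.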
Deepfake contest ‘a constant game of cat-and-mouse with bad actors’
To tackle this potential threat to their livelihood, DefAI has adopted a stance of vigilant optimism. Billinge says it is predicated on two important assumptions applied to the field of AI-related harms. One, “nothing is impossible to execute.” And two, “new attack vectors will emerge that learn from any defense mechanisms we develop.”
“Developers of solutions designed to tackle online harms are in a constant game of cat-and-mouse with bad actors, attempting to identify trends and respond to them in real time,” Billinge says. “At the same time, we must be stringent in ensuring that the data we are using for our research and development activities is sourced and processed ethically, in accordance with data protection law – while competing with actors who have fewer scruples in this regard.”
DefAI, then, aims to dig deep into the question of which attack vectors are currently most likely to be used against biometric age estimation technologies. Finding an answer will mean costing out various kinds of attacks, gathering evidence on attacks that have already happened, and carving out a more defined space for age estimation in the overall digital identity and verification landscape.
The consortium does not have long to execute: the project is set to wrap in summer 2025. By that point, Billinge says, Privately should have integrated advanced presentation and injection attack defenses into its product, with Idiap providing research support and ACCS evaluating effectiveness. Finally, AVPA will produce and share with its members a report outlining best practices and guidelines for the age assurance industry in dealing with deepfake attacks that keep on coming.