European Cyber Week is wrapping up in Rennes, Brittany, and Thales is leaving with a fresh metamodel for detecting AI-generated images.
The French company took up a challenge organized by France’s Defence Innovation Agency (AID), tasking firms to detect images created by AI platforms such as Midjourney, DALL-E and Firefly. According to a release, teams at cortAIx, Thales’s AI accelerator, used the opportunity to develop a metamodel capable of detecting AI-generated deepfakes.
The metamodel is an aggregation of models, each of which assigns an authenticity score to an image to classify it as real or synthetic. Machine learning techniques, decision trees and evaluations of the strengths and weaknesses of each model factor into the analysis of authenticity.
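The release does not describe the aggregation mechanics, but the general idea of combining per-model authenticity scores with weights reflecting each model's measured strengths can be sketched as follows. All names, weights and the threshold here are illustrative, not Thales's actual implementation:

```python
# Hypothetical sketch of a score-aggregating metamodel. Detector names,
# weights and the decision threshold are illustrative assumptions only.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Detector:
    name: str
    score: Callable[[bytes], float]  # returns P(synthetic) in [0, 1]
    weight: float                    # reflects evaluated strengths/weaknesses

def metamodel_score(image: bytes, detectors: List[Detector]) -> float:
    """Weighted average of the per-model authenticity scores."""
    total_weight = sum(d.weight for d in detectors)
    return sum(d.weight * d.score(image) for d in detectors) / total_weight

def classify(image: bytes, detectors: List[Detector],
             threshold: float = 0.5) -> str:
    """Label an image real or synthetic from the aggregated score."""
    return "synthetic" if metamodel_score(image, detectors) >= threshold else "real"
```

In a real system each `score` callable would wrap a trained model (CLIP-based, diffusion-based, frequency-based), and the weights could themselves be learned, for example by the decision trees the release mentions.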
Aggregated models include the Contrastive Language-Image Pre-training (CLIP) method, which compares an image with textual descriptions to find inconsistencies; the Diffusion Noise Feature (DNF) method, which uses diffusion models for deepfake detection based on estimates of the amount of noise needed to cause a “hallucination” in an image; and the Discrete Cosine Transform (DCT), which analyzes spatial frequencies in an image to detect hidden artifacts.
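To illustrate the DCT idea (this is a textbook frequency check, not Thales's detector): generative models often leave telltale energy patterns in the higher spatial-frequency bands of an image block, which a DCT makes visible. The `cutoff` value below is an arbitrary assumption:

```python
# Naive 2-D DCT-II over a small pixel block, used to measure how much
# spectral energy sits at high spatial frequencies. Illustrative only.
import math

def dct2(block):
    """2-D DCT-II of a square block (list of lists of floats)."""
    n = len(block)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            cu = math.sqrt(1 / n) if u == 0 else math.sqrt(2 / n)
            cv = math.sqrt(1 / n) if v == 0 else math.sqrt(2 / n)
            out[u][v] = cu * cv * s
    return out

def high_freq_ratio(block, cutoff=4):
    """Share of spectral energy at frequencies with u + v >= cutoff."""
    coeffs = dct2(block)
    n = len(coeffs)
    total = sum(c * c for row in coeffs for c in row)
    high = sum(coeffs[u][v] ** 2 for u in range(n) for v in range(n)
               if u + v >= cutoff)
    return high / total if total else 0.0
```

A smooth block concentrates nearly all its energy at the DC coefficient, while rapidly varying texture pushes energy into the high-frequency bands a detector would inspect; production systems compute this over many blocks with an optimized DCT rather than this O(n⁴) loop.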
According to Christophe Meyer, senior expert in AI and CTO of cortAIx, “aggregating multiple methods using neural networks, noise detection and spatial frequency analysis helps us better protect the growing number of solutions requiring biometric identity checks.”
Thales’s accelerator, cortAIx, has over 600 AI researchers and engineers, with facilities in Montreal and Paris.
The deepfake detection is coming from inside the browser: Surf
New approaches to deepfake detection continue to emerge by the day. Surf Security, which offers a “Zero Trust Browser,” has launched a beta run to let customers test its neural network-powered audio deepfake detection browser integration.
A release says the Deepwater deepfake detector tool is built into the Surf Security Enterprise Zero-Trust Browser and “can detect with up to 98 percent accuracy whether the person you’re interacting with is a real human or an AI imitation, alerting users to potential deepfake threats within seconds.”
The firm, which is based in the UK and U.S., cites research showing that “deepfake scams have grown by 303 percent in the U.S. year-on-year, and even faster in countries such as Portugal (1700 percent), China (2800 percent), Singapore (1100 percent) and myriad others.”
Such growth is unsurprising given how quickly deepfakes have become easy and cheap to create. The risks are widespread and manifold: criminal gangs leveraging deepfakes for confidence scams, rakish yet synthesized hucksters luring victims into romance scams, reputation damage, data loss, and running afoul of regulators. Work-from-home trends and the shift to SaaS communication platforms have made the need for protection even greater.
Surf uses “military-grade” AI deepfake detection that allows users to verify audio with the click of a button. It works with any audio source within the browser, including online videos or communication software such as Slack, Zoom, Google Chat, Microsoft Teams and WhatsApp.
The system is based on emerging technologies such as state-space models, which can “detect deepfakes across languages and accents by modeling probabilistic relationships between audio frames to show inconsistencies.” Its neural network is trained on deepfakes created using top AI voice cloning platforms, has an integrated background noise reduction feature to clear up audio before processing, and can make a determination in less than two seconds.
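Surf's state-space model is proprietary, but the underlying intuition of "modeling relationships between audio frames to show inconsistencies" can be sketched with a deliberately simple stand-in: fit a first-order linear predictor over per-frame features and flag frames whose transitions deviate sharply. The frame length, energy feature and scoring heuristic are all assumptions for illustration:

```python
# Toy frame-transition consistency check. This is NOT Surf's detector:
# a real system would use a learned state-space model over rich acoustic
# features, not a one-parameter predictor over frame energies.
from statistics import mean

def frame_energies(samples, frame_len=160):
    """Split a mono signal into fixed frames; return per-frame RMS energy."""
    frames = [samples[i:i + frame_len]
              for i in range(0, len(samples) - frame_len + 1, frame_len)]
    return [(sum(s * s for s in f) / len(f)) ** 0.5 for f in frames]

def transition_residuals(energies):
    """Residuals of a least-squares linear predictor e[t] ~ a * e[t-1]."""
    num = sum(energies[t] * energies[t - 1] for t in range(1, len(energies)))
    den = sum(e * e for e in energies[:-1]) or 1.0
    a = num / den
    return [abs(energies[t] - a * energies[t - 1])
            for t in range(1, len(energies))]

def inconsistency_score(samples):
    """Spikiness of transition errors; abrupt splices score high."""
    res = transition_residuals(frame_energies(samples))
    return max(res) / (mean(res) + 1e-9)
```

A signal with one abruptly inserted loud segment produces a transition residual far above the average, which this heuristic surfaces; a constant-dynamics signal scores near zero.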
Ziv Yankowitz, Surf Security’s CTO, says the Deepwater deepfake detector is “the first truly usable, real-time defence against deepfakes.” Surf plans to integrate image deepfake detection in the future.
Reality Defender gets financial boost from Booz Allen Ventures
A release says deepfake detection firm Reality Defender is the beneficiary of a “strategic investment” from Booz Allen Ventures, LLC, the corporate venture capital arm of military intelligence contractor Booz Allen Hamilton.
“Booz Allen’s leadership in AI security and deep expertise in supporting critical missions will enable Reality Defender to expand our impact at a time when securing communications against deepfakes is paramount,” says Ben Colman, CEO of Reality Defender.
Booz Allen provides AI and cybersecurity for the U.S. federal government. Its head of AI security, Matt Keating, says “sophisticated AI models are increasingly being used to manipulate and deceive, posing a real risk ranging from the battlefield and research labs to financial systems and communities nationwide. To combat these threats, we need tools to validate and secure multimodal content, such as videos, images, audio recordings, and phone calls. Reality Defender meets this need.”
For Booz Allen, the investment is part of a larger portfolio that also includes dual-use commercial technologies such as Credo AI (responsible AI), HiddenLayer (secure AI), LatentAI (AI data compression) and more.