Australian law enforcement officials want changes to privacy legislation that would enable them to use facial recognition to identify victims and perpetrators of child sexual abuse material (CSAM).
This was the message from the inaugural Safer AI for Children Summit, held in Sydney on Tuesday by the International Centre for Missing and Exploited Children (ICMEC), where representatives of Australia’s federal and state police forces spoke, as reported by Information Age.
Their argument was that existing privacy laws, along with public concern, prevent police from deploying AI tools such as facial recognition in these investigations. The representatives pointed to negative public perception of AI use by law enforcement, which stems partly from earlier reports of police misuse.
That misuse involved Australian police officers secretly using facial recognition software from U.S. company Clearview AI. In 2021, the Office of the Australian Information Commissioner (OAIC) found Clearview AI had breached the country’s Privacy Act by collecting biometric data without consent.
Limited supplier options
That data collection is both a regulatory compliance problem and the key feature that makes Clearview suited to the use case: its database includes people from all around the world who have no prior criminal record, and who can therefore be identified by the company’s algorithm. Earlier this year, the Ecuadorian arm of ICMEC selected Clearview to help with CSAM investigations.
Clearview’s closest competitors in the field of facial recognition for CSAM investigations are Marinus Analytics and Thorn.
Australian police have previously faced media scrutiny over their calls to use facial recognition to investigate child exploitation.
“There’s a lot of trepidation in the law enforcement community around using AI, particularly when some of us got burned pretty badly a few years ago in relation to some use of certain products,” says Simon Fogarty, manager of Victoria Police’s victim identification team, as quoted in Information Age.
Fogarty, however, drew a distinction between tools that scrape data from the open web and AI search tools that can be used offline. Meanwhile, Hiro Noda, coordinator of AI and emerging technology for the Australian Federal Police (AFP), said law enforcement needed to be “very transparent” about its use of AI, how the technology would be deployed, and how society would benefit.
The rise of generative AI
Generative AI has allowed offenders to create artificial CSAM more quickly, and it also enables the manipulation of real photos, making it harder to tell which images are genuine.
AFP Acting Commissioner Ian McCartney argued that AI could be deployed in this battle, since it could help with victim identification through facial recognition, though police should remain “accountable and responsible” when using the technology.
Adele Desirs, a victim identification analyst at Queensland Police, told the ICMEC summit that AI could make her work faster, easier, and more accurate, since CSAM analysts like her face a deluge of material to investigate.
AI was also invoked as a way to access encrypted communications and files. The AFP’s McCartney asserted that end-to-end encryption, as used by platforms such as Telegram and Meta’s WhatsApp, Instagram, and Facebook Messenger, has “significantly” impacted the police’s ability to identify CSAM offenders.
Desirs noted that many CSAM cases arise from encrypted platforms, which makes law enforcement’s job harder, and said technology companies could “push further” to help officers track down CSAM offenders and victims.
However, Mia Garlick, Meta’s regional director of policy for Australia, defended the company’s use of encryption, telling the summit that Meta assists police and identifies malicious activity in the parts of its services that are not encrypted.