Police in the U.S. are making arrests based on facial recognition technology, often without the knowledge of those being arrested, according to an exclusive investigation by the Washington Post, which says police departments in 15 states provided it with “rarely seen records documenting their use of facial recognition in more than 1,000 criminal investigations over the past four years.”
The report confirms that many commonly expressed fears about police use of facial recognition are justified. Through arrest reports and interviews, the Post learned that “authorities routinely failed to inform defendants about their use of the software – denying them the opportunity to contest the results of an emerging technology that is prone to error, especially when identifying people of color.”
Biometric testing by the National Institute of Standards and Technology (NIST) has found that facial recognition software is “more likely to misidentify people of color, women and the elderly because their faces tend to appear less frequently in data used to train the algorithms.” Of the seven people wrongfully arrested in the U.S. based on false facial recognition matches and subsequently cleared of all charges, six are Black.
Vague language, tight lips on FRT recommended by some police forces
The Post also notes that officers often used language intended to obscure the use of facial recognition: arrests were made, for instance, “by utilization of investigative databases.” Some departments encourage their officers to hide or downplay the use of face biometrics tools for law enforcement.
The report, credited to Douglas MacMillan, David Ovalle and Aaron Schaffer, takes the obligatory poke at Clearview AI, a favorite biometric boogeyman, and digs into the lack of clear legislation governing AI and data protection. It notes a tenet of American law established in Brady v. Maryland, which says prosecutors must inform defendants about any information that would help prove their innocence, reduce their sentence or hurt the credibility of a witness testifying against them; failing to do so is known as a “Brady violation.”
“No federal laws regulate facial recognition,” says the Post, “and courts do not agree whether AI identifications are subject to Brady rules.”
Police like facial recognition; ACLU does not
The regulatory lines around police and facial recognition may be hazy, but one thing is clear: law enforcement agencies love the technology (as long as it isn’t being used on them). Some are doing their part to explain how they are using it; the Post also has video of Miami Assistant Chief of Police Armando Aguilar speaking in front of a Senate subcommittee to discuss how his department makes use of facial recognition and AI.
The Post’s investigation notes that “over the past four years, the Miami Police Department ran 2,500 facial recognition searches in investigations that led to at least 186 arrests and more than 50 convictions.” Of those arrested, only seven percent were informed that facial recognition had been used. Aguilar says the department is revising its policies to require clear disclosure in every case involving facial recognition technology.
Police departments moving proactively to amend their policies on FRT would mark a change from the current situation, in which rules and restrictions are often applied at the end of a lawsuit. Such is the case for Michigan’s FRT policy for police, which is now among the nation’s strongest after police settled a suit filed by the American Civil Liberties Union (ACLU) and Robert Williams, a Detroit man who was wrongfully arrested in front of his family after being misidentified by facial recognition software.
The ACLU has come out strongly against facial recognition as a tool for policing, calling it “a massive risk to our civil liberties, particularly for Black men and women and other marginalized communities,” in a blog by Hayley Tsukayama of the Electronic Frontier Foundation (EFF). “That’s why EFF and ACLU California Action support a ban on government FRT use. Half-measures aren’t up to the task.”
The blog post goes on to criticize A.B. 1814, a bill currently before the California state legislature, saying it does not include adequate safeguards, that it “fails to even meet the bar of restrictions other police departments have agreed to adopt” and that it will still give police a way to access giant biometric databases managed by firms like (you guessed it) Clearview AI – “a company that’s been sued repeatedly for building its database from scraped social media posts.”
“California should not give law enforcement the green light to mine databases, particularly those built for completely different reasons. This goes against what people are expecting when they give their information to one database, only to learn later that information has been informing police face surveillance.”
Facial recognition ‘not DNA or fingerprints,’ says Maryland police captain
The ACLU’s Maryland chapter is likewise unhappy with a model policy for police use of facial recognition in that state, intended as a template for law enforcement in general. In keeping with the theme, Maryland police are on the defensive, insisting that their use of facial recognition tools comes with no ill will and plenty of restrictions.
Government Technology quotes Montgomery Police Capt. Nicholas Picerno, who says “we don’t treat facial recognition as forensic science. It’s not DNA or fingerprints. It’s a tool that allows us to get leads but the important thing is, I can’t stress that enough, we’re not making arrests based solely on facial recognition leads.”
The ACLU does not believe the current policy includes the “baseline protections” it requested in August. However, legal experts say one amendment to the policy could prevent police from contracting the facial recognition services of the much-feared Clearview AI. And Jake Parker, senior director of government relations at the Security Industry Association, calls the Maryland policy “the most comprehensive law in facial recognition used by law enforcement in the country.”
Debate continues as presidential election looms
At this point, police seem unlikely to stop using facial recognition unless it is made illegal (and perhaps not even then). Meanwhile, legislative and regulatory efforts continue. In September, a White House advisory panel approved a 24-page report setting forth specific actions that all federal law enforcement agencies must undertake when performing real-world testing of AI tools in the field, including AI-enhanced facial recognition technologies. Outgoing U.S. President Joe Biden’s Executive Order on AI will continue to provide momentum well beyond his term. And government standards bodies like NIST are offering support, resources and recommendations.
Who ends up in the White House in January will likely have a major impact on how states conduct law enforcement, and what is and is not allowed by certain actors. The issues of facial recognition, data privacy and the potential abuses attached to biometric technology are likely to be staring us in the face for a long time to come.