By Professor Fraser Sampson, former UK Biometrics & Surveillance Camera Commissioner
Accountability is about people with power using it properly, and where those people are the police, their use of power can have profound consequences. If their power is amplified by technology, accountability extends to its use. In policing, advanced technology comes with advanced accountability.
As artificial intelligence (AI) comes to policing, it will therefore bring greater complexity in accountability mechanisms that will differ in some respects from other sectors. All public bodies need to use technology, and they all need to field standard questions like ‘what exactly does it do and why do you need it?’ People want to know what happens if it goes wrong and where they can find out more about it. This is entry-level answerability, where policing is no different from any other public service: it must provide answers to questions about its use of tech-enabled capabilities. Artificial intelligence brings new challenges for public sector organizations, particularly when answering ‘how does it work’ questions, as accountability demands both transparency and understanding. In common with other publicly answerable authorities, any police service that has bought and deployed technology it can’t explain won’t find much comfort in a black-box defense.
Artificial intelligence presents law enforcement with different, more demanding accountability requirements depending on what they’re doing with it. If they’re using some elemental AI for purely administrative functions, the police will have the same level of accountability as other state agencies. For example, if a local police chief decides to use AI to issue uniforms to save staff time, that is not materially different from the local authority doing it. But using an AI capability for operational policing functions would be a wholly different use case. Ordering boots off the shelf isn’t the same as ordering boots on the street. Where AI-enabled technology is being used for a law enforcement purpose such as remote biometric identification or covert surveillance, the context ups the ante and policing should anticipate deeper levels of accountability. While it’s true that, once we start talking about inferential algorithms that calculate your age, mood or race, the stakes are much higher whoever is using them, AI-grade accountability in policing will need to address the specific risks and requirements of that context. It will need to reflect legal refinements (such as the EU AI Act) and reinforce the necessary safeguards.
Policing comes to AI with a strong record of accountability for adopting innovative technology in the UK. Biometrics, breathalyzers and body-worn video are examples. What makes AI different? Two things: novelty and latency. Artificial General Intelligence (AGI) brings existential considerations and, when faced with an existential challenge, we tend to put our faith in what has got us this far. But we haven’t been here before, and with AGI we won’t have the comfort of familiarity. Extraordinary technological capability is coming fast and, if we’re going to trust it for law enforcement purposes, AI will need to bring embedded accountability with it. While police accountability has always been less about having the answer and more about having to answer, AI capability will be of a very different order from that of any previous technology. Harnessing its potential will mean re-engineering traditional answerability models. Policing should get ready to answer for its use of AI, and also for not using it in circumstances where it offers an available and legitimate tactical option for preventing and solving crime and keeping people safe.
Being infinitely multi-functional, AI-enabled capability is all but guaranteed to go beyond its original brief. Single-purpose AI capability is an oxymoron. With the technology in an almost perpetual beta state, AI will constantly offer ever wider applications. This is one of AI’s strengths. It’s also the genesis of ‘function creep’, which can leave legitimate diversification looking disingenuous.
As the US Justice Department has recently recognized, AI comes to policing with the potential to transform law enforcement, but we have to synchronize innovation with assurance. Effective auditing – internal and external – is a must, and effectiveness will be determined by independent verification and assurance structures, along with mechanisms for timely intervention, improvement and indemnification.
Armed with enhanced capability to do the previously unheard-of, the police will increasingly have to say “just because we can doesn’t mean we are”. To do this convincingly they’ll need embedded accountability. In other words, the more the police could be doing with AI the more important it will be for them to show what they’re not (yet) doing with it, particularly in the UK as our model of policing is based on consent.
To that end, the Centre of Excellence in Terrorism, Resilience, Intelligence and Organised Crime Research (CENTRIC) is working with Innovate UK Business Connect, the North East Business Resilience Centre and the Metropolitan Police Service to design practical mechanisms and a software tool to assess and implement AI applications for policing (the AIPAS project).
Artificial intelligence offers irresistible potential for safeguarding society. Balancing the now possible with concerns and expectations about the state’s use of AI will be dynamically complex – more gyroscope than see-saw. Given its interdependencies and its inexorability, accountability for AI may soon be the ultimate proxy for public trust and confidence in policing.
About the author
Fraser Sampson, former UK Biometrics & Surveillance Camera Commissioner, is Professor of Governance and National Security at CENTRIC (Centre of Excellence in Terrorism, Resilience, Intelligence and Organised Crime Research) and a non-executive director at Facewatch.