By Paul Seddon
Politics reporter
The government is being urged to share more details of its plans to boost the use of AI to risk-score benefit claims.
The Department for Work and Pensions (DWP) has revealed plans to widen its use of the technology to tackle fraud.
Campaigners say more information is needed to ensure the system does not make biased referrals for benefit investigations.
The department insists it has safeguards in place, and it plans to share more information with MPs.
The DWP has put new technology at the heart of its plan to tackle fraud, which rose during the Covid pandemic when some in-person checks were suspended.
An estimated £8.3bn was overpaid in benefits this year - down on the year before, but double the £4.1bn overpaid in the last year before the pandemic.
Since last year, it has used an algorithm to flag potentially fraudulent claims for Universal Credit (UC) advances. These are interim payments for those in urgent need, which are then repaid monthly.
It uses machine learning, a widely used form of artificial intelligence (AI), to analyse historical benefits data and predict how likely a new claim is to be fraudulent or incorrect.
Claims scored as risky are then referred to civil servants to investigate, with payments put on hold until the referral has been dealt with.
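The DWP has not published the design of its model, but the workflow described above - train on historical claims, score each new claim, and refer it for human review if the score crosses a threshold - can be illustrated with a minimal sketch. Everything below (the features, the 5% historical fraud rate, the 0.8 threshold, the choice of a scikit-learn classifier) is an assumption for illustration, not the department's actual system.

```python
# Illustrative sketch only - the DWP's real features, model and
# threshold are not public. All names and values are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Stand-in for historical claims data: each row is a past claim,
# each label records whether it was later found fraudulent/incorrect.
X_hist = rng.random((1000, 4))       # hypothetical claim features
y_hist = rng.random(1000) < 0.05     # ~5% flagged historically (made up)

model = GradientBoostingClassifier().fit(X_hist, y_hist)

REFERRAL_THRESHOLD = 0.8             # hypothetical cut-off

def triage(claim_features):
    """Score a new claim; refer high-risk claims for human review."""
    risk = model.predict_proba([claim_features])[0, 1]
    if risk >= REFERRAL_THRESHOLD:
        # Payment is paused while a caseworker investigates; per the
        # DWP, the caseworker is not told the claim was machine-flagged.
        return "refer_for_review"
    return "pay_as_normal"

print(triage(rng.random(4)))
```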
In its annual accounts last week, the DWP disclosed plans to pilot “similar” models to review cases in four areas with high overpayment rates, including undeclared earnings from self-employment and incorrect housing costs.
The department has not given a date for full deployment of the models.
‘Serious risks’
The department says it continually monitors the algorithms to guard against the “inherent risk” of unintended bias, and says caseworkers are not told when cases have been flagged by the model.
But campaign group Privacy International said it had “ongoing concerns” about a “persistent lack of transparency” over how the technology was being used.
The group told the BBC the DWP had failed to provide “substantive information” about the tools it is using.
It added that an outside body should be handed an oversight role, given the “well-documented serious risks to fundamental rights” from decisions informed by algorithms.
The Child Poverty Action Group said it was alarmed by plans for greater use of machine learning, adding that “key flaws” in the DWP’s approach to digitalisation had not yet been addressed.
“Expanding the technology while ignoring calls for transparency and rigorous monitoring of and protections against bias will risk serious harm to vulnerable families,” added chief executive Alison Garnham.
Transparency ‘challenge’
Gareth Davies, the head of the National Audit Office (NAO), the UK’s spending watchdog, has also urged the department to publish details of any potential bias in its machine learning tools to “improve public confidence” in the systems.
In his statement on the accounts, he said the DWP had conceded that its ability to test for unfairness relating to protected characteristics – such as age, race and disability – was “currently limited”.
This was partly because claimants did not always respond to optional questions on their background, but also because certain information had been taken out of its systems for security reasons, he wrote.
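What such testing involves can be sketched simply: compare the model's referral rates across groups defined by a protected characteristic and look for disparities. The check below is a generic illustration, not the DWP's or the NAO's methodology; the group labels and sample data are assumptions.

```python
# Generic disparity check - not the DWP's actual methodology.
# Group labels and sample data are illustrative assumptions.
from collections import Counter

def referral_rates(referrals):
    """referrals: list of (group_label, was_referred) pairs.
    Returns the fraction of claims referred for each group."""
    totals, flagged = Counter(), Counter()
    for group, was_referred in referrals:
        totals[group] += 1
        flagged[group] += was_referred
    return {g: flagged[g] / totals[g] for g in totals}

# Hypothetical audit data: in practice the group labels would come
# from the optional background questions claimants do not always
# answer - the data gap the NAO statement highlights.
sample = [("under_25", True), ("under_25", False),
          ("over_25", False), ("over_25", False)]
print(referral_rates(sample))   # {'under_25': 0.5, 'over_25': 0.0}
```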
The department says it is taking steps to integrate the data into its systems soon, and has committed to reporting to MPs annually on how AI-powered tools are affecting different groups of claimants.
It also argues it faces a “challenge” in balancing calls for more transparency with a desire not to “tip off” potential fraudsters by revealing too much information about how it identifies potential fraud.
The department is expected to respond to the NAO’s recommendations later in the year.
Labour has also backed the use of AI against fraud, with shadow work and pensions secretary Jonathan Ashworth saying it could help tackle criminals “taking the taxpayer for a ride”.
In a speech on Tuesday to the Social Market Foundation, he added that the department’s use of the technology had yet to be “properly scaled”.
The party says it is committed to safeguards to prevent bias in the use of algorithms, although it has yet to set out detailed proposals.