The Department for Work and Pensions (DWP) began testing a machine learning algorithm to detect benefits fraud last year. The algorithm analyses historical data to predict which cases are likely to be fraudulent, without its decision rules being explicitly programmed by a human. The plans were outlined in the DWP's annual report and accounts, and were also confirmed in a report by the National Audit Office (NAO) published on Thursday July 7.
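The DWP has not published its model, so the following is only a minimal sketch of the general pattern described here: learn fraud rates from labelled historical claims, then score new claims by estimated risk. All field names, values and data are invented for illustration.

```python
# Illustrative sketch only: the DWP has not published its model.
# The general pattern of supervised fraud scoring is to learn from
# labelled historical claims and then rank new claims by risk.
from collections import defaultdict

def train_fraud_rates(historical_claims):
    """For each (feature, value) pair, estimate the share of past
    claims with that value that were confirmed fraudulent."""
    counts = defaultdict(lambda: [0, 0])  # (feature, value) -> [fraud, total]
    for claim, is_fraud in historical_claims:
        for feature, value in claim.items():
            stats = counts[(feature, value)]
            stats[0] += int(is_fraud)
            stats[1] += 1
    return {key: fraud / total for key, (fraud, total) in counts.items()}

def risk_score(claim, rates):
    """Average the learned fraud rates over the claim's features;
    values never seen in training contribute a neutral 0."""
    seen = [rates.get((f, v), 0.0) for f, v in claim.items()]
    return sum(seen) / len(seen) if seen else 0.0

# Hypothetical labelled history: (claim features, confirmed fraud?)
history = [
    ({"channel": "online", "advance": True}, True),
    ({"channel": "online", "advance": True}, True),
    ({"channel": "phone", "advance": False}, False),
    ({"channel": "online", "advance": False}, False),
]
rates = train_fraud_rates(history)
new_claim = {"channel": "online", "advance": True}
print(round(risk_score(new_claim, rates), 2))  # -> 0.83
```

The point the sketch makes concrete is that the model's output is entirely a function of the historical labels it was trained on, which is why campaigners' concerns below focus on the quality and bias of that history.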
While campaigners welcomed the crackdown on fraud, many pointed out that the algorithm could expose some people to discrimination by having their claims classified as “potentially fraudulent”.
Ariane Adam, Legal Director of the Public Law Project, said: “Despite numerous requests under the Freedom of Information Act, the DWP has previously declined to provide details of its use of automation to assess Universal Credit claims. This lack of transparency is very problematic.
“Without transparency there can be no evaluation, and without evaluation it is not possible to say whether a system is operating reliably, legally or fairly.”
Ms Adam said the “discriminatory impact” was a “massive risk” for marginalised or vulnerable groups.
She added: “It could be, for example, because historical data may be inaccurate or because it may be tainted by human biases which will be exacerbated by the machine.”
In 2021-22, the model was run to detect fraud in advances claims already in payment. The DWP now plans to trial the model on claims before any payments are made in 2022-23.
In its annual report, the DWP said: ‘If successful, this could improve its ability to prevent fraud before these benefits are paid, avoiding the need to recover payments later.’
The NAO report noted that the DWP intends to monitor the model for unintended bias, and is aware that groups with protected characteristics could be disproportionately affected and that the model could prevent claimants from receiving their money.
The DWP report said: “It is inevitable that some cases flagged as potentially fraudulent will turn out to be legitimate claims.”
“If the model were to disproportionately identify a group with a protected characteristic as more likely to commit fraud, the model could inadvertently hinder equitable access to benefits.”
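One simple way to monitor the kind of disparity the report describes is to compare the share of claims flagged in each group. The sketch below is an assumption about what such a check could look like, not a description of the DWP's actual monitoring; the groups, claims and threshold of concern are invented for the example.

```python
# Illustrative sketch only: a basic check for disproportionate flagging
# compares flag rates across groups. Groups and data are invented here.
def flag_rates(flagged_claims):
    """flagged_claims: list of (group_label, was_flagged) pairs.
    Returns the fraction of claims flagged in each group."""
    totals, flags = {}, {}
    for group, flagged in flagged_claims:
        totals[group] = totals.get(group, 0) + 1
        flags[group] = flags.get(group, 0) + int(flagged)
    return {g: flags[g] / totals[g] for g in totals}

def flag_rate_ratio(group_rates, group_a, group_b):
    """Ratio of group_a's flag rate to group_b's. Values far above 1
    suggest group_a's claims are flagged disproportionately often."""
    return group_rates[group_a] / group_rates[group_b]

# Hypothetical flagging outcomes: (group, was the claim flagged?)
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
group_rates = flag_rates(sample)
print(flag_rate_ratio(group_rates, "A", "B"))  # 2.0: A flagged twice as often
```

A check like this only detects a disparity; deciding whether a given ratio is acceptable, and what to do about it, is the policy question the NAO report raises.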
It also noted that the final decision on potentially fraudulent cases would be made at the discretion of a DWP caseworker.
Ms Adam added that, as the UK was ‘in the midst of a cost of living crisis’, the prospect of Britons’ benefits being cut off before they were even paid ‘because a computer algorithm said “no”’ was very problematic.
She said: “Government departments need to commit to much more than just being aware of the risks.
“We need a clear commitment that all departments will be transparent about how they use algorithms.
“The presumption should be that detailed information about the operation of automated decision-making tools is made available, and that all data and analyses collected from trials are published, without charities having to make endless requests for access to information.
“Any deviation from this presumption must be justified by the government and be necessary and proportionate. Exemption should not be the default.”
Responding to criticism, the DWP said it “did not use artificial intelligence to make decisions about how a Universal Credit application should progress.”
It also said it would “continue to work hard” to be as “transparent as possible” about its claims process “without compromising our ability to identify fraud”.
A DWP spokesperson said: “It’s only right that we track fraud in today’s digital age so that we can prevent, detect and deter those who would try to cheat the system and, importantly, improve our support for genuine claimants.”