Automated welfare fraud detection system contravenes international law, Dutch court rules

By Mia Hunt on 09/02/2020 | Updated on 24/09/2020
The court ruled that the Dutch government’s ‘risk indication system’ legislation fails a balancing test in Article 8 of the European Convention on Human Rights. (Image by S. Hermann & F. Richter, Pixabay).

A Dutch court has ruled that an automated surveillance system using artificial intelligence (AI) to detect welfare fraud violates the European Convention on Human Rights, and has ordered the government to cease using it immediately. The judgement comes as governments around the world are ramping up use of AI in administering welfare benefits and other core services, and its implications are likely to be felt far beyond the Netherlands.

The Dutch government’s risk indication system (SyRI) is a risk calculation model used by the social affairs and employment ministry. It gathers government data previously held in separate silos – such as housing, employment, personal debt and benefit records – and analyses it using an algorithm to identify which individuals might be at a higher risk of committing benefit or tax fraud. It is deployed primarily in neighbourhoods with a high proportion of low-income and minority residents.
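How SyRI actually computes its risk scores was never disclosed, but the general pattern it describes – linking records from separate silos and scoring them against a risk model – can be illustrated with a minimal, hypothetical sketch. Everything below (the LinkedRecord fields, the WEIGHTS, the THRESHOLD, and the scoring rule) is an invented assumption for illustration only, not the actual SyRI model.

```python
# Hypothetical illustration only: SyRI's real risk model and risk factors
# were not disclosed. Field names, weights and the threshold are invented.
from dataclasses import dataclass


@dataclass
class LinkedRecord:
    citizen_id: str
    housing_flags: int   # e.g. discrepancies in registered address data
    debt_flags: int      # e.g. outstanding personal-debt markers
    benefit_flags: int   # e.g. anomalies in benefit or employment records


# Invented weights standing in for an undisclosed risk model.
WEIGHTS = {"housing_flags": 0.5, "debt_flags": 0.3, "benefit_flags": 0.8}
THRESHOLD = 1.0  # arbitrary cut-off for illustration


def risk_score(record: LinkedRecord) -> float:
    """Combine per-silo indicators into a single risk score."""
    return (WEIGHTS["housing_flags"] * record.housing_flags
            + WEIGHTS["debt_flags"] * record.debt_flags
            + WEIGHTS["benefit_flags"] * record.benefit_flags)


def flag_high_risk(records: list[LinkedRecord]) -> list[str]:
    """Return the citizen IDs whose combined score exceeds the threshold."""
    return [r.citizen_id for r in records if risk_score(r) > THRESHOLD]
```

The opacity of exactly these kinds of choices – which indicators are combined, how they are weighted and where the cut-off sits – is what the court went on to criticise.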

The case against the government was brought by a number of civil society organisations, including the Netherlands Law Committee on Human Rights, and two citizens. They argued that poor neighbourhoods and their inhabitants were being spied on digitally without concrete suspicion of individual wrongdoing.  

The court found that the SyRI legislation fails the balancing test required by Article 8 of the European Convention on Human Rights (ECHR) – which demands that any social interest, in this case preventing and combating fraud in the interest of economic wellbeing, be weighed against the resulting violation of individuals’ privacy – and is therefore unlawful.

Lack of transparency

The court also found the legislation to be “insufficiently clear, verifiable… and controllable” and criticised a lack of transparency about the way the system functions.

According to the New York-based non-governmental organisation Human Rights Watch, the Dutch government refused to disclose “meaningful information” during the hearing about how SyRI uses personal data to draw inferences about possible fraud, in particular the risk models and risk factors applied.

According to a press release, the state does not accept that the system violates human rights, and maintains that the SyRI legislation contains sufficient guarantees to protect individuals’ privacy. It is not clear whether the government will appeal the decision.

The UN special rapporteur on extreme poverty and human rights, Philip Alston, said in a statement that the verdict was “a clear victory for all those who are justifiably concerned about the serious threats digital welfare systems pose for human rights”.

The decision “sets a strong legal precedent for other courts to follow,” he added. “This is one of the first times a court anywhere has stopped the use of digital technologies and abundant digital information by welfare authorities on human rights grounds.”

The judgement – which comes at a time when EU policymakers are working on a framework to regulate AI and ensure that it is applied ethically and in a human-centric way – does not bar governments from using automated profiling systems. However, it makes clear that human rights law in Europe must be central to the design and implementation of such tools.

The effect of the ruling is not expected to be limited to signatories of the ECHR. Christiaan van Veen, director of the digital welfare state and human rights project at New York University School of Law, said, as reported by The Guardian, that it was “important to underline that SyRI is not a unique system; many other governments are experimenting with automated decision-making in the welfare state”.

“This strong ruling will set a strong precedent globally that will encourage activists in other countries to challenge their governments.”

About Mia Hunt

Mia is a journalist and editor with a background in covering commercial property, having been market reports and supplements editor at trade title Property Week and deputy editor of Shopping Centre magazine, now known as Retail Destination. She has also undertaken freelance work for several publications including the preview magazine of international trade show, MAPIC, and TES Global (formerly the Times Educational Supplement) and has produced a white paper on energy efficiency in business for E.ON. Between 2014 and 2016, she was a member of the Revo Customer Experience Committee and an ACE Awards judge. Mia graduated from Kingston University with a first-class degree in journalism and was part of the team that produced The River newspaper, which won Publication of the Year at the Guardian Student Media Awards in 2010.

4 Comments

  1. mike roberts says:

    Attached is the Netherlands decision, one of a number setting a high barrier.

  2. Mike Roberts says:

    Dutch courts in the lead in Europe again

  3. Joanne Boyden says:

    Ask the people of the State of Michigan who were tagged for unemployment fraud and their lives ruined because of an AI algorithm!!! 47,000 people were accused (wrongly!!) of Unemployment fraud because of some MACHINE that said they were doing something wrong! Lives ruined because of this Artificial Intelligence system with NO human backup!!!
