Racist Technology in Action: How the municipality of Amsterdam tried to roll out a ‘fair’ fraud detection algorithm. Spoiler alert: it was a disaster

Amsterdam officials’ technosolutionist way of thinking struck once again: with their “Smart Check” AI system, they believed they could build technology that prevents fraud while protecting citizens’ rights.

MIT Technology Review reported on this last month, with additional reporting by Lighthouse Reports and Trouw.

Smart Check was designed to process welfare applications and calculate a fraud risk score for each applicant, and it was trained on data from previous welfare fraud investigations. Its ‘fairness’ effort included reducing the number of variables the city had initially considered for calculating an applicant’s score and excluding variables that could introduce further bias, such as gender, nationality, age and postal code. After each round of testing, parameters that turned out to discriminate on the basis of race or gender were ‘fixed’ by assigning them lower weights. The city consulted experts, ran bias tests, implemented technical safeguards and solicited feedback from the people who would be affected by the program, following what they considered to be the ‘ethical AI playbook’. However, they ignored the negative advice of the Participation Council, a 15-member advisory committee of beneficiaries, advocates and other nongovernmental stakeholders who represent the interests of the very people the system was designed to help and to scrutinise.
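To make the approach concrete, here is a minimal, hypothetical sketch in Python of the two steps described above: dropping sensitive variables and lowering the weight of a parameter flagged by a bias test. The feature names, data and audit output are invented for illustration only and do not reflect the city’s actual Smart Check implementation.

```python
# Hypothetical sketch of the 'debiasing' steps described above.
# NOT the city's Smart Check code; all names and numbers are invented.

import numpy as np
from sklearn.linear_model import LogisticRegression

ALL_FEATURES = [
    "gender", "nationality", "age", "postal_code",   # sensitive attributes
    "benefit_duration", "household_size", "reported_income",
]

# Step 1: exclude variables that could introduce further bias.
EXCLUDED = {"gender", "nationality", "age", "postal_code"}
features = [f for f in ALL_FEATURES if f not in EXCLUDED]

# Stand-in training data: rows are past investigations, labels are fraud findings.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, len(features)))
y = rng.integers(0, 2, size=500)

model = LogisticRegression().fit(X, y)

# Step 2: after a bias test flags a remaining feature as a proxy for a
# protected group, shrink its learned weight (a crude 'reweighting').
flagged_by_audit = {"household_size": 0.5}           # hypothetical audit output
for name, factor in flagged_by_audit.items():
    model.coef_[0][features.index(name)] *= factor

# Fraud risk score for one applicant.
risk_score = model.predict_proba(X[:1])[0, 1]
print(f"risk score: {risk_score:.2f}")
```

Even in this toy version, the down-weighted feature still influences the score: the bias is dampened on one axis, not removed from a model trained on skewed investigation data.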

When the pilot was rolled out, the results were disastrous: because the weights for the communities that usually experience the negative impact had been lowered, the bias shifted, and the system now disproportionately flagged men with Dutch nationality.

Our own Hans de Zwart was flabbergasted when he first heard about the system in his role as a ‘critical friend’ of the city’s algorithm registry. He doesn’t think it is legitimate to use data on past behaviour to judge the future behaviour of citizens, which fundamentally cannot be predicted, nor to use opaque algorithms that depoliticise decision-making.

This failure once again highlights the need not to keep tweaking algorithms but to fundamentally rethink how welfare systems operate: putting human dignity before fraud detection and addressing the real challenges welfare recipients face, such as rising costs, bureaucratic burdens and systemic discrimination.

See: Inside Amsterdam’s high-stakes experiment to create fair welfare AI at MIT Technology Review.

Image by Chantal Jahchan for MIT Technology Review.
