The algorithm that the city of Rotterdam used to predict the risk of welfare fraud fell into the hands of journalists. It turns out that the system was biased against marginalised groups like young mothers and people who don’t have Dutch as their first language.
Reporters from Follow the Money, Lighthouse Reports, Argos and Vers Beton were able to test the algorithm extensively, using data from real citizens. By isolating individual personal characteristics (for example a person’s gender), they could show that certain characteristics led to a higher risk score for welfare fraud.
This is how they discovered that if you aren’t able to speak Dutch well, your risk score is double that of somebody with exactly the same profile as you, but with a good grasp of the Dutch language.
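To make the method concrete: below is a minimal sketch of such a feature-isolation test, assuming a scikit-learn-style classifier. The model, the feature names and the helper functions are hypothetical illustrations, not details of Rotterdam’s actual system or of the journalists’ exact code.

```python
# Hypothetical sketch: score the same citizen twice, changing only one
# characteristic, and compare the model's risk estimates.
import pandas as pd


def risk_score(model, profile: dict) -> float:
    """Estimated probability of the 'high fraud risk' class for one profile."""
    return model.predict_proba(pd.DataFrame([profile]))[0][1]


def compare_single_feature(model, profile: dict, feature: str, alt_value):
    """Return (original score, score with only `feature` changed)."""
    counterfactual = {**profile, feature: alt_value}
    return risk_score(model, profile), risk_score(model, counterfactual)


# Example with an assumed feature name: identical profiles, only language differs.
# base, flipped = compare_single_feature(model, citizen, "speaks_dutch_well", False)
# print(f"risk ratio: {flipped / base:.2f}")
```

Repeating this comparison over many real profiles is what allowed the reporters to show, characteristic by characteristic, which attributes drive up the risk score.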
Rotterdam paused the use of the algorithm after criticism from the Court of Audit and the City Council. The city now admits that the model it used to assess the risk of fraud was biased:
Over time, we have found that the risk estimation model could never remain 100 percent free of bias or the appearance thereof. That situation is undesirable, especially when it involves variables that carry a risk of bias based on discriminatory grounds such as age, nationality or gender.
Oddly enough, Rotterdam still has the ambition to launch a new and improved version of the algorithm, even though it is completely unclear what the benefits would be and how the risks would be mitigated. It is stupefying to think how much effort cities put into getting only marginal gains from fighting the marginal problem of the alleged fraud of an utterly marginalised group of people.
See: Zo leerde een Rotterdams fraudealgoritme kwetsbare groepen te verdenken (How a Rotterdam fraud algorithm learned to suspect vulnerable groups) at Follow the Money, or Inside the Suspicion Machine at Wired.