New York City’s Administration for Children’s Services (ACS) has been secretly using an AI risk assessment system since 2018 to flag families for additional investigation. An investigation by The Markup reveals that the algorithm mainly affects families of colour, raising serious questions about algorithmic bias against racialised and poor families in child welfare.
Racist Technology in Action: How the municipality of Amsterdam tried to roll out a ‘fair’ fraud detection algorithm. Spoiler alert: it was a disaster
Amsterdam officials’ technosolutionist way of thinking struck once again: with their “Smart Check” AI system, they believed they could build technology that would prevent fraud while also protecting citizens’ rights.
What does it mean for an algorithm to be “fair”?
Amsterdam’s struggles with its welfare fraud algorithm show us the stakes of deploying AI in situations that directly affect human lives.
By Eileen Guo and Hans de Zwart for MIT Technology Review on June 17, 2025
House bill targets loopholes that let car insurance companies charge more in Black neighborhoods
Bill reintroduction follows investigation by The Markup and Outlier Media that found insurers target Black neighborhoods for high rates.
By Koby Levin for The Markup on June 11, 2025
The NYC Algorithm Deciding Which Families Are Under Watch for Child Abuse
How a family’s neighborhood, age, and mental health might get their case a deeper look.
By Colin Lecher for The Markup on May 20, 2025
How we investigated Amsterdam’s attempt to build a ‘fair’ fraud detection model
Amsterdam spent years trying to build an unbiased welfare fraud algorithm. Here’s what we found when we analyzed it.
By Amanda Silverman, Eileen Guo, Eva Constantaras, Gabriel Geiger, and Justin-Casimir Braun for Lighthouse Reports on June 11, 2025
Amsterdam wanted to use AI to make social assistance fairer and more efficient. It turned out differently
The government has repeatedly gone wrong with algorithms intended to combat benefits fraud. The municipality of Amsterdam wanted to do everything differently, but discovered that an ethical algorithm is an illusion.
By Hans de Zwart and Jeroen van Raalte for Trouw on June 6, 2025
Amnesty report (yet again) exposes racist AI in UK police forces
Amnesty International UK’s report Automated Racism (from last February, PDF) reveals that almost three-quarters of UK police forces use discriminatory predictive policing systems that perpetuate racial profiling. At least 33 deploy AI tools that predict crime locations and profile individuals as future criminals based on biased historical data, entrenching racism and inequality.
Report algorithmic discrimination!
AlgorithmWatch wants to shine a light on where and how algorithmic discrimination can take place. Do you have reason to believe that algorithmic discrimination may have taken place? Then we ask you to report this to us to help us better understand the extent of the issue and the havoc algorithmic systems can wreak on our lives. Your hints can help us make algorithmic discrimination more visible and strengthen our advocacy for appropriate guardrails.
From AlgorithmWatch on May 19, 2025
Dutch Institute for Human Rights creates an evaluation framework for risk profiling and urges organisations to do more to prevent discrimination based on race and nationality
The Dutch Institute for Human Rights has published an evaluation framework for risk profiling intending to prevent discrimination based on race or nationality.
‘Ethical’ AI in healthcare has a racism problem, and it needs to be fixed ASAP
We all know that racist algorithms can harm people across many sectors, and healthcare is no exception. In a powerful commentary published by Cell Press, Ferryman et al. argue that racism must be treated as a core ethical issue in healthcare AI, not merely a flaw to be patched after deployment.
Racist Technology in Action: The Dutch Belastingdienst’s ‘Risk Analysis Model’
For about 20 years, the Dutch tax office used a home-brewed computer system (RAM) that brought together information about millions of taxpayers in one model. KPMG has looked at how the Belastingdienst used this system, and their findings are shocking.
From prostitutes to Belgian pensioners: arbitrariness reigned in the Belastingdienst’s privacy-violating RAM system
Tax enforcement: the Belastingdienst’s RAM system was crucial for monitoring taxpayers, yet there was hardly any oversight or security. On Thursday, the Tweede Kamer will debate the situation at the agency.
By Derk Stokmans and Stefan Vermeulen for NRC on March 12, 2025
In Spain, an algorithm used by police to ‘combat’ gender violence determines whether women live or die
Lobna Hemid. Stefany González Escarraman. Eva Jaular (and her 11-month-old baby). The lives of these three women and an infant, amongst many others, tragically ended in gender-related killings in Spain. As reported in this article, all of them had been classified as “low” or “negligible” risk by VioGén, despite reporting abuse to the police. Lobna Hemid, for example, reported her husband’s abuse to the police and was assessed as “low risk” by VioGén; the police provided her with minimal protection, and weeks later her husband stabbed her to death.
Opinion: ‘Minister Van Weel’s plan for online surveillance will hit Muslims disproportionately hard’
The Dutch police and security services have a tradition of discrimination against Muslims, writes Evelyn Austin, director of Bits of Freedom. She fears that Muslims will once again pay the price if the police are given more powers to surveil online.
By Evelyn Austin for Het Parool on February 8, 2025
Amnesty demands the Dutch government stop using any and all risk profiling
Amnesty takes a deep dive into the shameful racial and socio-economic discrimination against students in the DUO case (about which we’ve written here, here, and here) in their briefing titled Profiled Without Protection: Students in the Netherlands Hit By Discriminatory Fraud Detection System.
Racist Technology in Action: AI tenant screening fails the ‘fairness’ test
SafeRent Solutions, an AI-powered tenant screening company, settled a lawsuit alleging that its algorithm disproportionately discriminated against Black and Hispanic renters and those relying on housing vouchers.
My Fight Against Algorithmic Bias
How can algorithms become fairer? An essay by scientist and speaker Robin Pocornie.
By Robin Pocornie for BASF on December 5, 2024
Revealed: bias found in AI system used to detect UK benefits fraud
Exclusive: Age, disability, marital status and nationality influence decisions to investigate claims, prompting fears of ‘hurt first, fix later’ approach.
By Robert Booth for The Guardian on December 6, 2024
Sweden’s Suspicion Machine
Behind a veil of secrecy, the social security agency deploys discriminatory algorithms in search of a fraud epidemic it has invented.
By Ahmed Abdigadir, Anna Tiberg, Daniel Howden, Eva Constantaras, Frederick Laurin, Gabriel Geiger, Henrik Malmsten, Iben Ljungmark, Justin-Casimir Braun, Sascha Granberg, and Thomas Molén for Lighthouse Reports on November 27, 2024
‘I received a first but it felt tainted and undeserved’: inside the university AI cheating crisis
More than half of students are now using generative AI, casting a shadow over campuses as tutors and students turn on each other and hardworking learners are caught in the flak. Will Coldwell reports on a broken system.
By Will Coldwell for The Guardian on December 15, 2024
The Dutch government must stop all risk profiling
DUO used a discriminatory risk profiling system to select students for checks on misuse of the grant for students living away from home.
From Amnesty International (NL) on November 21, 2024
Checked (former) students with a grant for living away from home get their money back
The government is paying back fines and reclaimed student financing to (former) students who received the grant for living away from home. It is doing this because the selection process for the checks on this grant involved indirect discrimination. The evidence obtained in these checks to decide whether or not someone was actually living away from home should not have been used. That makes the decisions unlawful, and they are therefore being reversed, Minister Bruins (Education, Culture and Science) writes to the Tweede Kamer. He is setting aside € 61 million to put things right.
From Dienst Uitvoering Onderwijs (DUO) on November 11, 2024
Dutch government’s toxic relation with using data to detect social welfare fraud
The latest episode in the twisted series titled ‘The Dutch government is wildly discriminatory, using citizens’ data to seek out social welfare fraud’ has just come out.
Dutch government has to pay back 61 million euros to students who were discriminated against through DUO’s fraud profiling operation
We’ve written twice before about the racist impact of DUO’s student fraud detection efforts. The Dutch government has now decided to pay back all the fines and the study financing they held back for all students that were checked between 2012 and 2023.
Racist Technology in Action: Anti-money laundering efforts by Dutch banks disproportionately affect people with a non-Western migration background
Banks have a requirement to ‘know their customers’ and to look for money laundering and the financing of terrorism. Their vigilante efforts lead to racist outcomes.
Falsely Flagged: The AI-Driven Discrimination Black Students Face
Common Sense, an education platform that advocates for an equitable and safe school environment, published a report last month on the adoption of generative AI at home and at school. Parents, teachers, and children were surveyed to better understand the adoption and effects of the technology.
In the Netherlands, algorithmic discrimination is everywhere according to the Dutch Data Protection Authority
In its 2023 annual report, the Autoriteit Persoonsgegevens (the Dutch Data Protection Authority) is dismayed by how much algorithmic discrimination it encounters while doing its oversight.
Racist Technology in Action: Michigan car insurers are allowed to charge a higher premium in Black neighbourhoods
An investigation by The Markup and Outlier Media shows how the law in Michigan allows car insurers to take location into account when deciding on a premium, penalizing the state’s Black population.
Why Stopping Algorithmic Inequality Requires Taking Race Into Account
Let us explain. With cats.
By Aaron Sankin and Natasha Uzcátegui-Liggett for The Markup on July 18, 2024
Government still makes widespread use of discriminatory algorithms
Despite the child benefits scandal, government organisations continued to use ‘ill-considered algorithms’ last year, writes the Autoriteit Persoonsgegevens.
By Jeroen Piersma for Het Financieele Dagblad on July 2, 2024
After the child benefits scandal, in which many single-parent families and families with a migration background, among others, were wrongly accused of fraud, it became painfully clear that not only people discriminate, but algorithms do too
It was promised that these systems would become fairer, but the new annual report of the Autoriteit Persoonsgegevens shows that little has improved since then. Algorithms still wrongly categorise people with certain characteristics as a risk. Noëlle Cecilia, co-founder of Brush AI (@ai.brush), was a guest on Mandy’s show on Sunday. She builds algorithms for companies and spent a year researching their fairness and discrimination. She explains to us why the mindset in developing AI systems needs to change.
By Noëlle Cecilia for Instagram on July 9, 2024
How and why algorithms discriminate
Automated decision-making systems contain hidden discriminatory prejudices. We’ll explain the causes, possible consequences, and the reasons why existing laws do not provide sufficient protection against algorithmic discrimination.
By Pie Sombetzki for AlgorithmWatch on June 26, 2024
Racist Technology in Action: AI detection of emotion rates Black basketball players as ‘angrier’ than their White counterparts
In 2018, Lauren Rhue showed that two leading emotion detection software products had a racial bias against Black men: Face++ rated them as angrier, and Microsoft AI rated them as more contemptuous.
Are you 80% angry and 2% sad? Why ‘emotional AI’ is fraught with problems
AI that purports to read our feelings may enhance user experience but concerns over misuse and bias mean the field is fraught with potential dangers.
By Ned Carter Miles for The Guardian on June 23, 2024
The datafication of race and ethnicity
The New York Times published a fascinating overview of the American census forms since the late 18th century. It shows how the form keeps trying to ‘capture’ the country’s demographics, “creating and reshaping the ever-changing views of racial and ethnic identity.”
Podcast: Art as a prophetic activity for the future of AI
Our own Hans de Zwart was a guest in the ‘Met Nerds om Tafel’ podcast. With Karen Palmer (creator of Consensus Gentium, a film about surveillance that watches you back), they discussed the role of art and storytelling in getting us ready for the future.
Racist Technology in Action: MyLife.com and discriminatory predation
MyLife.com is one of those immoral American companies that collect personal information to sell on as profiles, while at the same time suggesting to the people being profiled that incriminating information about them exists online which they can have removed by buying a subscription (that then does nothing and auto-renews in perpetuity).
Image generators are trying to hide their biases – and they make them worse
In the run-up to the EU elections, AlgorithmWatch investigated which election-related images can be generated by popular AI systems. Two of the largest providers do not adhere to the security measures they themselves recently announced.
By Nicolas Kayser-Bril for AlgorithmWatch on May 29, 2024
Students with a non-European migration background had a 3.0 times higher chance of receiving an unfounded home visit from the Dutch student grants fraud department
Last year, Investico revealed how DUO, the Dutch organization for administering student grants, was using a racist algorithm to decide which students would get a home visit to check for fraudulent behaviour. The Minister of Education immediately stopped the use of the algorithm.