Are you working on policy or public services, and do you want to contribute to a government that treats everyone equally? The Discriminatietoets helps you do just that.
From Discriminatietoets
The DUO discrimination scandal, in which more than 10,000 students were discriminated against, has led to multiple initiatives that aim to prevent this from happening again. None of these initiatives addresses the core problems of “predictive optimisation”.
Continue reading “Dutch responses to the DUO scandal miss the point: technical fixes won’t solve the fundamental problems of predictive optimisation”

In another example of racial profiling in the Netherlands, banks such as ING, Rabobank, and ABN Amro have subjected Muslims and Palestinian activists to intrusive, discriminatory checks based on allegations of “terrorist financing.”
Continue reading “Racist Technology in Action: Dutch banks racially profile Muslims and Palestinian activists, alleging “terrorist activity””

After years of criticism, it is finally happening: the Top400 and Top600 are being scrapped! They will be replaced by a new approach, ‘Veiligheid en Zorg’ (Safety and Care), with the promise that it incorporates the lessons learned about stigmatisation and about how young people and their parents are treated.
By Lotte Houwing for Bits of Freedom on December 18, 2025
After being ghosted by numerous recruiters during her unemployment, Aliyah Jones, a Black woman, decided to create a LinkedIn ‘catfish’ account under the name Emily Osborne, a blonde-haired, blue-eyed white woman eager to advance her career in graphic design. The only difference between ‘Emily’ and Jones? Their names and skin colour. Their work experience and capabilities were the same.
Continue reading “She had to catfish herself as a white woman to get a job: AI-mediated racism on LinkedIn and in recruiting”

We wrote about the I am not a typo campaign before. They have shared good news with us: “Responding to feedback from customers and working with members of the I Am Not A Typo campaign, Microsoft has implemented product updates to ensure its dictionary better reflects the names of people living in modern, multicultural Britain, using official Office for National Statistics (ONS) baby name data as a guide.”
Continue reading “Microsoft will update its English (UK) dictionaries with a more inclusive name database”

For many years and for many people, GeoMatch by the Immigration Policy Lab was a shining example of ‘AI for Good’: instead of using algorithms to find criminals or fraud, why not use them to allocate asylum seekers to regions that give them the most job opportunities? Only the naive can be surprised that this didn’t work out as promised.
Continue reading “Racist Technology in Action: The algorithm that was supposed to match asylum seekers to places with jobs doesn’t work and is discriminatory”

The MIT Technology Review shows how the models of major AI companies, like OpenAI’s ChatGPT, reflect India’s caste bias.
Continue reading “Racist Technology in Action: The caste bias in large language models”

The Dutch tax office is plagued by one problem after another. The Child Benefits Scandal was supposed to be a wake-up call, but apparently its systems have atrophied to the point where it can’t seem to do what is needed. Follow the Money reports on a letter from the Dutch data protection authority to the Minister of Finance, which argues that the tax office uses 50 algorithms that rely on discriminatory profiling and are therefore potentially unlawful.
Continue reading “50% of the profiling algorithms used by the Dutch tax office are discriminatory and therefore unlawful according to the Data Protection Authority”

Almost all AI applications have a preference for white men. The Council of Europe recently called for action because AI fuels discrimination, prejudice, and violence against women. How do you make sure that artificial intelligence doesn’t discriminate?
By Marijn Heemskerk for Vrij Nederland on July 19, 2025
Last June, researchers quantitatively proved TikTok’s racist, misogynistic, and appalling practices by comparing the platform’s different metrics for the popular beauty filter Bold Glamour and cross-referencing the results with the social media company’s own “inclusivity policies”.
Continue reading “Racist Technology in Action: Scientists show that TikTok is racist, sexist, and disgusting”

Amnesty International’s research has found that the introduction of digital technologies into the UK’s flawed and inadequate social security system has, in many cases, led to further hardship for social security claimants. This has negatively affected the realization of claimants’ human rights, including their rights to social security and an adequate standard of living.
From Amnesty International on July 10, 2025
Peter Jacobs | CEO of ING Nederland: ING Nederland concludes that customers rightly feel discriminated against by its anti-money-laundering approach, owing to impersonal letters and a lack of knowledge. “You know, if you don’t know when Ramadan takes place, you’ll be surprised when a mosque suddenly hands in a lot more cash.”
By Eva Smal for NRC on May 26, 2025
New York City’s Administration for Children’s Services (ACS) has been secretly using an AI risk assessment system since 2018 to flag families for additional investigation. This Markup investigation reveals how this algorithm mainly affects families of colour and raises serious questions about algorithmic bias against racialised and poor families in child welfare.
Continue reading “New York City uses a secret Child Welfare Algorithm”

Amsterdam officials’ technosolutionist way of thinking struck once again: they believed they could build technology that would prevent fraud while protecting citizens’ rights through their “Smart Check” AI system.
Continue reading “Racist Technology in Action: How the municipality of Amsterdam tried to roll out a ‘fair’ fraud detection algorithm. Spoiler alert: it was a disaster”

Amsterdam’s struggles with its welfare fraud algorithm show us the stakes of deploying AI in situations that directly affect human lives.
By Eileen Guo and Hans de Zwart for MIT Technology Review on June 17, 2025
Bill reintroduction follows investigation by The Markup and Outlier Media that found insurers target Black neighborhoods for high rates.
By Koby Levin for The Markup on June 11, 2025
How a family’s neighborhood, age, and mental health might get their case a deeper look.
By Colin Lecher for The Markup on May 20, 2025
Amsterdam spent years trying to build an unbiased welfare fraud algorithm. Here’s what we found when we analyzed it.
By Amanda Silverman, Eileen Guo, Eva Constantaras, Gabriel Geiger, and Justin-Casimir Braun for Lighthouse Reports on June 11, 2025
The government has gone wrong before with algorithms meant to combat benefits fraud. The municipality of Amsterdam wanted to do everything differently, but discovered that an ethical algorithm is an illusion.
By Hans de Zwart and Jeroen van Raalte for Trouw on June 6, 2025
Amnesty International UK’s report Automated Racism (from last February, PDF) reveals that almost three-quarters of UK police forces use discriminatory predictive policing systems that perpetuate racial profiling. At least 33 deploy AI tools that predict crime locations and profile individuals as future criminals based on biased historical data, entrenching racism and inequality.
Continue reading “Amnesty report (yet again) exposes racist AI in UK police forces”

AlgorithmWatch wants to shine a light on where and how algorithmic discrimination can take place. Do you have reason to believe that algorithmic discrimination may have taken place? Then we ask you to report this to us to help us better understand the extent of the issue and the havoc algorithmic systems can wreak on our lives. Your hints can help us make algorithmic discrimination more visible and strengthen our advocacy for appropriate guardrails.
From AlgorithmWatch on May 19, 2025
The Dutch Institute for Human Rights has published an evaluation framework for risk profiling, aimed at preventing discrimination based on race or nationality.
Continue reading “Dutch Institute for Human Rights creates an evaluation framework for risk profiling and urges organisations to do more to prevent discrimination based on race and nationality”

We all know that racist algorithms can harm people across many sectors, and healthcare is no exception. In a powerful commentary published by Cell Press, Ferryman et al. argue that racism must be treated as a core ethical issue in healthcare AI, not merely a flaw to be patched after deployment.
Continue reading “‘Ethical’ AI in healthcare has a racism problem, and it needs to be fixed ASAP”

For about 20 years, the Dutch tax office used a home-brewed computer system (RAM) that brought the information about millions of taxpayers together in one model. KPMG has looked at how the Belastingdienst used this system, and its findings are shocking.
Continue reading “Racist Technology in Action: The Dutch Belastingdienst’s ‘Risk Analysis Model’”

Tax oversight: The Belastingdienst’s RAM system was crucial for monitoring taxpayers, yet there was hardly any oversight or security. On Thursday, the Tweede Kamer will debate the situation at the agency.
By Derk Stokmans and Stefan Vermeulen for NRC on March 12, 2025
Lobna Hemid. Stefany González Escarraman. Eva Jaular (and her 11-month-old baby). The lives of these three women and an infant, amongst many others, were tragically ended by gender-related killings in Spain. As reported in this article, they were all classified as “low” or “negligible” risk by VioGén, despite having reported abuse to the police. In the case of Lobna Hemid, after she reported her husband’s abuse and was assessed as “low risk” by VioGén, the police provided her with minimal protection; weeks later, her husband stabbed her to death.
Continue reading “In Spain, an algorithm used by police to ‘combat’ gender violence determines whether women live or die”

The Dutch police and security services have a tradition of discriminating against Muslims, writes Evelyn Austin, director of Bits of Freedom. She fears that Muslims will once again bear the brunt if the police are given more powers to surveil people online.
By Evelyn Austin for Het Parool on February 8, 2025
Amnesty takes a deep dive into the shameful racial and socio-economic discrimination against students in the DUO case (about which we’ve written here, here, and here) in their briefing titled Profiled Without Protection: Students in the Netherlands Hit By Discriminatory Fraud Detection System.
Continue reading “Amnesty demands the Dutch government stop using any and all risk profiling”

SafeRent Solutions, an AI-powered tenant screening company, settled a lawsuit alleging that its algorithm disproportionately discriminated against Black and Hispanic renters and those relying on housing vouchers.
Continue reading “Racist Technology in Action: AI tenant screening fails the ‘fairness’ test”

How can algorithms become fairer? An essay by scientist and speaker Robin Pocornie.
By Robin Pocornie for BASF on December 5, 2024
Exclusive: Age, disability, marital status and nationality influence decisions to investigate claims, prompting fears of ‘hurt first, fix later’ approach.
By Robert Booth for The Guardian on December 6, 2024
Behind a veil of secrecy, the social security agency deploys discriminatory algorithms searching for a fraud epidemic it has invented.
By Ahmed Abdigadir, Anna Tiberg, Daniel Howden, Eva Constantaras, Frederick Laurin, Gabriel Geiger, Henrik Malmsten, Iben Ljungmark, Justin-Casimir Braun, Sascha Granberg, and Thomas Molén for Lighthouse Reports on November 27, 2024
More than half of students are now using generative AI, casting a shadow over campuses as tutors and students turn on each other and hardworking learners are caught in the flak. Will Coldwell reports on a broken system.
By Will Coldwell for The Guardian on December 15, 2024
DUO used a discriminatory risk-profiling system to select students for checks on misuse of the grant for students living away from home.
From Amnesty International (NL) on November 21, 2024
The government is repaying fines and reclaimed student finance to current and former students who received the grant for students living away from home. It is doing so because the selection process for checks on this grant involved indirect discrimination. The evidence obtained during these checks to decide whether or not someone was living away from home should not have been used. That makes the decisions unlawful, and they are therefore being reversed, Minister Bruins (Education, Culture and Science) writes to the Tweede Kamer. He is setting aside €61 million to rectify the matter.
From Dienst Uitvoering Onderwijs (DUO) on November 11, 2024
The latest episode in the twisted series titled ‘The Dutch government is wildly discriminatory, using citizens’ data to seek out social welfare fraud’ has just come out.
Continue reading “Dutch government’s toxic relation with using data to detect social welfare fraud”

We’ve written twice before about the racist impact of DUO’s student fraud detection efforts. The Dutch government has now decided to pay back all the fines and withheld study financing to every student who was checked between 2012 and 2023.
Continue reading “Dutch government has to pay back 61 million euros to students who were discriminated against through DUO’s fraud profiling operation”

Banks are required to ‘know their customers’ and to look for money laundering and the financing of terrorism. Their vigilante efforts lead to racist outcomes.
Continue reading “Racist Technology in Action: Anti-money laundering efforts by Dutch banks disproportionately affect people with a non-Western migration background”

Common Sense, an education platform that advocates for an equitable and safe school environment, published a report last month on the adoption of generative AI at home and at school. Parents, teachers, and children were surveyed to better understand the adoption and effects of the technology.
Continue reading “Falsely Flagged: The AI-Driven Discrimination Black Students Face”