Amnesty takes a deep dive into the shameful racial and socio-economic discrimination against students in the DUO case (about which we’ve written here, here, and here) in their briefing titled Profiled Without Protection: Students in the Netherlands Hit By Discriminatory Fraud Detection System.
Continue reading “Amnesty demands the Dutch government stop using any and all risk profiling”
Racist Technology in Action: AI tenant screening fails the ‘fairness’ test
SafeRent Solutions, an AI-powered tenant screening company, settled a lawsuit alleging that its algorithm disproportionately discriminated against Black and Hispanic renters and those relying on housing vouchers.
Continue reading “Racist Technology in Action: AI tenant screening fails the ‘fairness’ test”
My Fight Against Algorithmic Bias
How can algorithms become fairer? An essay by scientist and speaker Robin Pocornie.
By Robin Pocornie for BASF on December 5, 2024
Revealed: bias found in AI system used to detect UK benefits fraud
Exclusive: Age, disability, marital status and nationality influence decisions to investigate claims, prompting fears of ‘hurt first, fix later’ approach.
By Robert Booth for The Guardian on December 6, 2024
Sweden’s Suspicion Machine
Behind a veil of secrecy, the social security agency deploys discriminatory algorithms searching for a fraud epidemic it has invented.
By Ahmed Abdigadir, Anna Tiberg, Daniel Howden, Eva Constantaras, Frederick Laurin, Gabriel Geiger, Henrik Malmsten, Iben Ljungmark, Justin-Casimir Braun, Sascha Granberg, and Thomas Molén for Lighthouse Reports on November 27, 2024
‘I received a first but it felt tainted and undeserved’: inside the university AI cheating crisis
More than half of students are now using generative AI, casting a shadow over campuses as tutors and students turn on each other and hardworking learners are caught in the flak. Will Coldwell reports on a broken system.
By Will Coldwell for The Guardian on December 15, 2024
Cabinet must stop all risk profiling
DUO used a discriminatory risk profiling system to select students for checks on misuse of the grant for students living away from home (the ‘uitwonendenbeurs’).
From Amnesty International (NL) on November 21, 2024
Checked (former) students with a grant for living away from home will get their money back
The government is repaying fines and reclaimed student financing to (former) students who received the grant for living away from home. It is doing so because the selection process for the checks on this grant involved indirect discrimination. The evidence obtained in these checks to decide whether or not someone was actually living away from home should never have been used. That makes the decisions unlawful, and they will therefore be reversed, minister Bruins (Education, Culture and Science) writes to the Tweede Kamer. He is setting aside € 61 million to set things right.
From Dienst Uitvoering Onderwijs (DUO) on November 11, 2024
Dutch government’s toxic relationship with using data to detect social welfare fraud
The latest episode in the twisted series titled ‘The Dutch government is wildly discriminatory, using citizens’ data to seek out social welfare fraud’ has just come out.
Continue reading “Dutch government’s toxic relationship with using data to detect social welfare fraud”
Dutch government has to pay back 61 million euros to students who were discriminated against through DUO’s fraud profiling operation
We’ve written twice before about the racist impact of DUO’s student fraud detection efforts. The Dutch government has now decided to pay back all the fines and the withheld study financing to all students who were checked between 2012 and 2023.
Continue reading “Dutch government has to pay back 61 million euros to students who were discriminated against through DUO’s fraud profiling operation”
Racist Technology in Action: Anti-money laundering efforts by Dutch banks disproportionately affect people with a non-Western migration background
Banks have a requirement to ‘know their customers’ and to look for money laundering and the financing of terrorism. Their vigilante efforts lead to racist outcomes.
Continue reading “Racist Technology in Action: Anti-money laundering efforts by Dutch banks disproportionately affect people with a non-Western migration background”
Falsely Flagged: The AI-Driven Discrimination Black Students Face
Common Sense, an education platform that advocates for an equitable and safe school environment, published a report last month on the adoption of generative AI at home and at school. Parents, teachers, and children were surveyed to better understand the adoption and effects of the technology.
Continue reading “Falsely Flagged: The AI-Driven Discrimination Black Students Face”
In the Netherlands, algorithmic discrimination is everywhere according to the Dutch Data Protection Authority
In its 2023 annual report, the Autoriteit Persoonsgegevens (the Dutch Data Protection Authority) is dismayed by how much algorithmic discrimination it encounters while doing its oversight.
Continue reading “In the Netherlands, algorithmic discrimination is everywhere according to the Dutch Data Protection Authority”
Racist Technology in Action: Michigan car insurers are allowed to charge a higher premium in Black neighbourhoods
An investigation by The Markup and Outlier Media shows how the law in Michigan allows car insurers to take location into account when deciding on a premium, penalizing the state’s Black population.
Continue reading “Racist Technology in Action: Michigan car insurers are allowed to charge a higher premium in Black neighbourhoods”
Why Stopping Algorithmic Inequality Requires Taking Race Into Account
Let us explain. With cats.
By Aaron Sankin and Natasha Uzcátegui-Liggett for The Markup on July 18, 2024
Government still using discriminatory algorithms on a large scale
Despite the child benefits scandal, government organisations continued to use ‘ill-considered algorithms’ last year, writes the Autoriteit Persoonsgegevens.
By Jeroen Piersma for Het Financieele Dagblad on July 2, 2024
After the child benefits scandal, in which many single-parent families and families with a migration background, among others, were wrongly accused of fraud, it became painfully clear that not only people discriminate, but algorithms do too.
It was promised that these systems would become fairer, but the new annual report of the Autoriteit Persoonsgegevens shows that little has improved since then. Algorithms still wrongly categorise people with certain characteristics as a risk. Noëlle Cecilia, co-founder of Brush AI (@ai.brush), was a guest on Mandy’s show on Sunday. She builds algorithms for companies and spent a year researching their fairness and discrimination. She explains why the mindset in developing AI systems has to change.
By Noëlle Cecilia for Instagram on July 9, 2024
How and why algorithms discriminate
Automated decision-making systems contain hidden discriminatory prejudices. We’ll explain the causes, possible consequences, and the reasons why existing laws do not provide sufficient protection against algorithmic discrimination.
By Pie Sombetzki for AlgorithmWatch on June 26, 2024
Racist Technology in Action: AI detection of emotion rates Black basketball players as ‘angrier’ than their White counterparts
In 2018, Lauren Rhue showed that two leading emotion detection software products had a racial bias against Black men: Face++ thought they were angrier, and Microsoft AI thought they were more contemptuous.
Continue reading “Racist Technology in Action: AI detection of emotion rates Black basketball players as ‘angrier’ than their White counterparts”
Are you 80% angry and 2% sad? Why ‘emotional AI’ is fraught with problems
AI that purports to read our feelings may enhance user experience, but concerns over misuse and bias mean the field is fraught with potential dangers.
By Ned Carter Miles for The Guardian on June 23, 2024
The datafication of race and ethnicity
The New York Times published a fascinating overview of the American census forms since the late 18th century. It shows how the form keeps trying to ‘capture’ the country’s demographics, “creating and reshaping the ever-changing views of racial and ethnic identity.”
Continue reading “The datafication of race and ethnicity”
Podcast: Art as a prophetic activity for the future of AI
Our own Hans de Zwart was a guest in the ‘Met Nerds om Tafel’ podcast. With Karen Palmer (creator of Consensus Gentium, a film about surveillance that watches you back), they discussed the role of art and storytelling in getting us ready for the future.
Continue reading “Podcast: Art as a prophetic activity for the future of AI”
Racist Technology in Action: MyLife.com and discriminatory predation
MyLife.com is one of those immoral American companies that, on the one hand, collect personal information to sell on as profiles, while at the same time suggesting to the people being profiled that incriminating information about them exists online, which they can have removed by buying a subscription (one that then does nothing and auto-renews in perpetuity).
Continue reading “Racist Technology in Action: MyLife.com and discriminatory predation”
Image generators are trying to hide their biases – and they make them worse
In the run-up to the EU elections, AlgorithmWatch has investigated which election-related images can be generated by popular AI systems. Two of the largest providers don’t adhere to security measures they themselves have recently announced.
By Nicolas Kayser-Bril for AlgorithmWatch on May 29, 2024
Students with a non-European migration background had a 3.0 times higher chance of receiving an unfounded home visit from the Dutch student grants fraud department
Last year, Investico revealed how DUO, the Dutch organization for administering student grants, was using a racist algorithm to decide which students would get a home visit to check for fraudulent behaviour. The Minister of Education immediately stopped the use of the algorithm.
Continue reading “Students with a non-European migration background had a 3.0 times higher chance of receiving an unfounded home visit from the Dutch student grants fraud department”
Dutch Ministry of Foreign Affairs dislikes the conclusions of a solid report that marks their visa process as discriminatory so buys a shoddy report saying the opposite
For more than a year now, the Dutch Ministry of Foreign Affairs has ignored advice from its experts and continued its use of discriminatory risk profiling of visa applicants.
Continue reading “Dutch Ministry of Foreign Affairs dislikes the conclusions of a solid report that marks their visa process as discriminatory so buys a shoddy report saying the opposite”
Dutch Institute of Human Rights tells the government: “Test educational tools for possible discriminatory effects”
The Dutch Institute for Human Rights has commissioned research exploring the possible risks for discrimination and exclusion relating to the use of algorithms in education in the Netherlands.
Continue reading “Dutch Institute of Human Rights tells the government: “Test educational tools for possible discriminatory effects””
Racist Technology in Action: Autocorrect is Western- and White-focused
The “I am not a typo” campaign is asking the tech giants to update their name dictionaries and stop autocorrecting the 41% of names given to babies in England and Wales that are currently flagged as mistakes.
Continue reading “Racist Technology in Action: Autocorrect is Western- and White-focused”
Follow-up research confirms indirect discrimination in checks on the grant for students living away from home
DUO commissioned the independent foundation Algorithm Audit to carry out follow-up research into how DUO checked, between 2012 and 2023, whether or not a student rightfully received student financing at the rate for those living away from home. The conclusions of the follow-up research confirm that students with a migration background were indirectly discriminated against in the process.
From Dienst Uitvoering Onderwijs (DUO) on May 21, 2024
We’ll fix the mistakes later: how the municipality let a dubious algorithm loose on the people of Rotterdam
It was too good to be true: an algorithm to detect welfare fraud. Despite warnings, the municipality of Rotterdam kept believing in it for almost four years. A handful of civil servants, insufficiently aware of the ethical risks, were able to experiment undisturbed with the data of vulnerable people for years.
By Romy van Dijk and Saskia Klaassen for Vers Beton on October 23, 2023
WATCH OUT, says the Ministry of Foreign Affairs’ computer about tens of thousands of visa applications. Is that discrimination?
Discriminatory algorithm: according to an investigation, the algorithm that Foreign Affairs uses to assess visa applications discriminated. Dissatisfied with that conclusion, the ministry asked for a second opinion.
By Carola Houtekamer and Merijn Rengers for NRC on May 1, 2024
OpenAI’s GPT sorts resumes with a racial bias
Bloomberg did a clever experiment: they had OpenAI’s GPT rank resumes and found that it shows gender and racial bias based solely on the candidate’s name.
Continue reading “OpenAI’s GPT sorts resumes with a racial bias”
Racist Technology in Action: The UK Home Office’s Sorting Algorithm and the Racist Violence of Borders
In 2020, two NGOs finally forced the UK Home Office’s hand, compelling it to abandon its secretive and racist algorithm for sorting visitor visa applications. Foxglove and The Joint Council for the Welfare of Immigrants (JCWI) had been battling the algorithm for years, arguing that it is a form of institutionalized racism and calling it “speedy boarding for white people.”
Continue reading “Racist Technology in Action: The UK Home Office’s Sorting Algorithm and the Racist Violence of Borders”
OpenAI GPT Sorts Resume Names With Racial Bias, Test Shows
Recruiters are eager to use generative AI, but a Bloomberg experiment found bias against job candidates based on their names alone.
By Davey Alba, Leon Yin, and Leonardo Nicoletti for Bloomberg on March 8, 2024
We’re Not Living a “Predicted” Life: Student Perspectives on Wisconsin’s Dropout Algorithm
Wisconsin took down its dropout predictions after a Markup investigation. Here’s what two students we featured have to say.
By Maurice Newton and Mia Townsend for The Markup on December 21, 2023
Borders and Bytes
So-called “smart” borders are just more sophisticated sites of racialized surveillance and violence. We need abolitionist tools to counter them.
By Ruha Benjamin for Inquest on February 13, 2024
The Allegheny Family Screening Tool’s Overestimation of Utility and Risk
A report on the Allegheny Family Screening Tool (a pilot for predictive risk modeling in family policing) and its overestimation of utility and risk.
By Aaron Horowitz, Ana Gutierrez, Anjana Samant, Kath Xu, Marissa Gerchick, Noam Shemtov, Sophie Beiers, Tarak Shah, and Tobi Jegede for Logic on December 13, 2023
The child benefits scandal: no lessons learned
“It could happen again tomorrow” is one of the main devastating conclusions of the parliamentary inquiry following the child benefits scandal.
Continue reading “The child benefits scandal: no lessons learned”
Racist Technology in Action: ChatGPT detectors are biased against non-native English writers
Students are using ChatGPT to write their essays. Antiplagiarism tools try to detect whether a text was written by AI. It turns out that these types of detectors consistently misclassify the text of non-native speakers as AI-generated.
Continue reading “Racist Technology in Action: ChatGPT detectors are biased against non-native English writers”
Racist Technology in Action: Slower internet service for the same price in U.S. lower income areas with fewer White residents
Investigative reporting by The Markup showed how U.S. internet providers offer wildly different internet speeds for the same monthly fee. The neighbourhoods with the worst deals had lower median incomes and were very often the least White.
Continue reading “Racist Technology in Action: Slower internet service for the same price in U.S. lower income areas with fewer White residents”