DUO is the Dutch organisation for administering student grants. It uses an algorithm to help it decide which students get a home visit to check for fraudulent behaviour. It turns out that it almost exclusively checks students of colour, and it has no clue why.
World Bank / Jordan: Poverty Targeting Algorithms Harm Rights
An automated cash transfer program in Jordan developed with significant financing from the World Bank is undermined by errors, discriminatory policies, and stereotypes about poverty.
By Amos Toh for Human Rights Watch on June 13, 2023
An algorithm intended to reduce poverty might disqualify people in need
According to a new report by Human Rights Watch, an algorithmic welfare distribution system funded by the World Bank unfairly and inaccurately quantifies poverty.
By Tate Ryan-Mosley for MIT Technology Review on June 13, 2023
DUO's fraud hunt almost exclusively hits students with a migration background
The hunt for suspected fraudsters by student finance provider DUO almost exclusively affects students with a migration background. DUO sees no wrongdoing on its part and wants to quadruple the number of checks in September.
By Anouk Kootstra, Bas Belleman and Belia Heilbron for De Groene Amsterdammer on June 21, 2023
On Race, AI, and Representation Or, Why Democracy Now Needs To Redo Its June 1 Segment
On June 1, Democracy Now featured a roundtable discussion hosted by Amy Goodman and Nermeen Shaikh, with three experts on Artificial Intelligence (AI), about their views on AI in the world. They included Yoshua Bengio, a computer scientist at the Université de Montréal, long considered a “godfather of AI,” Tawana Petty, an organiser and Director of Policy at the Algorithmic Justice League (AJL), and Max Tegmark, a physicist at the Massachusetts Institute of Technology. Recently, the Future of Life Institute, of which Tegmark is president, issued an open letter calling on all AI labs “to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” Bengio is a signatory on the letter (as is Elon Musk). The AJL has been around since 2016, and has (along with other organisations) been calling for a public interrogation of racialised surveillance technology, the use of police robots, and other ways in which AI can be directly responsible for bodily harm and even death.
By Yasmin Nair for Yasmin Nair on June 3, 2023
GPT detectors are biased against non-native English writers
The rapid adoption of generative language models has brought about substantial advancements in digital communication, while simultaneously raising concerns regarding the potential misuse of AI-generated content. Although numerous detection methods have been proposed to differentiate between AI and human-generated content, the fairness and robustness of these detectors remain underexplored. In this study, we evaluate the performance of several widely-used GPT detectors using writing samples from native and non-native English writers. Our findings reveal that these detectors consistently misclassify non-native English writing samples as AI-generated, whereas native writing samples are accurately identified. Furthermore, we demonstrate that simple prompting strategies can not only mitigate this bias but also effectively bypass GPT detectors, suggesting that GPT detectors may unintentionally penalize writers with constrained linguistic expressions. Our results call for a broader conversation about the ethical implications of deploying ChatGPT content detectors and caution against their use in evaluative or educational settings, particularly when they may inadvertently penalize or exclude non-native English speakers from the global discourse.
By Eric Wu, James Zou, Mert Yuksekgonul, Weixin Liang and Yining Mao for arXiv.org on April 18, 2023
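One mechanism behind several of these detectors is worth making concrete: text that a language model finds highly predictable (low perplexity) gets flagged as machine-written, which is exactly where writers with a constrained linguistic range lose out. Below is a minimal Python sketch of that idea, assuming the Hugging Face transformers library and GPT-2; the threshold and sample sentences are illustrative assumptions, not the detectors or data evaluated in the paper.

```python
# Hedged sketch of perplexity-based AI-text detection, one common detector
# mechanism. Model, threshold and sentences are illustrative assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """How predictable the text is under the model (lower = more predictable)."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels=input_ids makes the model return the mean
        # cross-entropy loss over the tokens; exp(loss) is the perplexity.
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

THRESHOLD = 60.0  # hypothetical cut-off: below it, text looks "too predictable"

for text in [
    "The cat sat quietly on the warm windowsill in the afternoon sun.",
    "The results of the test is good and the method is good for the task.",
]:
    ppl = perplexity(text)
    verdict = "flagged as AI" if ppl < THRESHOLD else "judged human"
    print(f"perplexity {ppl:6.1f} -> {verdict}: {text!r}")
```

Simpler, more formulaic prose tends to score lower perplexity, so a writer with a smaller working vocabulary can fall under such a threshold for reasons that have nothing to do with using AI.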
Opinion: ‘Not just the disappearing acceptgiro (paper payment slip), but smart algorithms too widen the digital divide’
Anyone who sees technology purely as progress forgets a group of Dutch people for whom it is not. That group includes not just the elderly, says Aaron Mirck, but also the victims of the smart algorithms used by the government. Technology is not neutral, he argues.
By Aaron Mirck for Het Parool on May 27, 2023
Mean Images
An artist considers a new form of machinic representation: the statistical rendering of large datasets, indexed to the probable rather than the real of photography; to the uncanny composite rather than the abstraction of the graph.
By Hito Steyerl for New Left Review on April 28, 2023
Data & Society Announces the Launch of its Algorithmic Impact Methods Lab
Lab will advance assessments of AI systems in the public interest.
From Data & Society on May 10, 2023
Ethnic Profiling
Whistleblower reveals Netherlands’ use of secret and potentially illegal algorithm to score visa applicants.
By Ariadne Papagapitos, Carola Houtekamer, Crofton Black, Daniel Howden, Gabriel Geiger, Klaas van Dijken, Merijn Rengers and Nalinee Maleeyakul for Lighthouse Reports on April 24, 2023
The Netherlands is not struggling with digitalisation, but with discrimination
Algorithms: time and again it is marginalised groups who are hit by digitalisation more often than others, write Evelyn Austin and Nadia Benaissa.
By Evelyn Austin and Nadia Benaissa for NRC on May 5, 2023
‘Beware of this visa application’, warns the algorithm that fosters discrimination. The ministry ignores criticism
Visa policy: The Ministry of Foreign Affairs outsources as much of the paperwork around visa applications as possible to foreign companies. But the risk of unequal treatment through the profiling of applicants remains. Criticism of this from its own internal privacy watchdog was brushed aside by the ministry.
By Carola Houtekamer, Merijn Rengers and Nalinee Maleeyakul for NRC on April 23, 2023
Meta’s clampdown on Palestine speech is far from ‘unintentional’
A report validated Palestinian experiences of social media censorship in May 2021, but missed how those policies are biased by design.
By Marwa Fatafta for +972 Magazine on October 9, 2022
The black box of algorithms: how discrimination is automated
This week Evelyn dove into the (anything but male) world of glitch art, we discuss the algorithm that the city of Rotterdam used for years to predict which welfare recipients might be fiddling with their benefits, and we get podcast heavyweight Lieven Heeremans on the line.
By Evelyn Austin, Inge Wannet, Joran van Apeldoorn, Lieven Heeremans and Nadia Benaissa for Bits of Freedom on April 15, 2023
Racist Technology in Action: You look similar to someone we didn’t like → Dutch visa denied
Ignoring earlier Dutch failures in automated decision making, and ignoring advice from its own experts, the Dutch ministry of Foreign Affairs has decided to cut costs and cut corners by implementing a discriminatory profiling system to process visa applications.
How AIs collapse our history and culture into a monolithic perspective
In this piece on Medium, Jenka Gurfinkel writes about a Reddit user who asked Midjourney, a generative AI, to do the following:
Imagine a time traveler journeyed to various times and places throughout human history and showed soldiers and warriors of the periods what a “selfie” is.
More data will not solve bias in algorithmic systems: it’s a systemic issue, not a ‘glitch’
In an interview with Zoë Corbyn in the Guardian, data journalist and Associate Professor of Journalism Meredith Broussard discusses her new book More Than a Glitch: Confronting Race, Gender and Ability Bias in Tech.
Racist Technology in Action: Racial disparities in the scoring system used for housing allocation in L.A.
In another investigation by The Markup, significant racial disparities were found in the assessment system used by the Los Angeles Homeless Services Authority (LAHSA), the body responsible for coordinating homelessness services in Los Angeles. The assessment system relies on a tool called the Vulnerability Index-Service Prioritisation Decision Assistance Tool (VI-SPDAT) to score and assess whether people qualify for subsidised permanent housing.
A ‘new benefits scandal’ is emerging in how banks deal with Muslims
There should be an investigation into the discrimination of Muslims by financial institutions, says Rabin Baldewsingh, the National Coordinator against Discrimination and Racism. He warns of a new benefits scandal.
By Rabin Baldewsingh and Somajeh Ghaeminia for Trouw on April 6, 2023
This Student Is Taking On ‘Biased’ Exam Software
Mandatory face-recognition tools have repeatedly failed to identify people with darker skin tones. One Dutch student is fighting to end their use.
By Morgan Meaker and Robin Pocornie for WIRED on April 5, 2023
AI expert Meredith Broussard: ‘Racism, sexism and ableism are systemic problems’
The journalist and academic says the bias encoded in artificial intelligence systems can’t be fixed with better data alone – the change has to be societal.
By Meredith Broussard and Zoë Corbyn for The Guardian on March 26, 2023
Panel discussion on racism in AI: ‘Artificial intelligence holds up a mirror to us’
How do algorithms contribute to racism? And what are the consequences? These questions were discussed during a panel discussion on Wednesday afternoon at Science Park. ‘We need to create a “safe space” in which companies dare to be transparent without immediately being punished for it.’
By Sija van den Beukel for Folia on March 16, 2023
You Are Not a Parrot
You are not a parrot. And a chatbot is not a human. And a linguist named Emily M. Bender is very worried about what will happen when we forget this.
By Elizabeth Weil and Emily M. Bender for New York Magazine on March 1, 2023
Work related to the Racism and Technology Center is getting media attention
The current wave of reporting on the AI bubble has one advantage: it also creates a bit of space in the media to write about how AI reflects the existing inequities in our society.
Denmark’s welfare fraud system reflects a deeply racist and exclusionary society
As part of a series of investigative reporting by Lighthouse Reports and WIRED, Gabriel Geiger has revealed some of the findings about the use of welfare fraud algorithms in Denmark. This follows the increasing use of algorithmic systems to detect welfare fraud across European cities, or at least the systems that are currently publicly known.
Racist Technology in Action: Rotterdam’s welfare fraud prediction algorithm was biased
The algorithm that the city of Rotterdam used to predict the risk of welfare fraud fell into the hands of journalists. It turns out that the system was biased against marginalised groups like young mothers and people who don’t have Dutch as their first language.
L.A.’s Scoring System for Subsidized Housing Gives Black and Latino People Experiencing Homelessness Lower Priority Scores
An investigation by The Markup found racial disparities in L.A.’s intake system for unhoused people.
By Colin Lecher and Maddy Varner for The Markup on February 28, 2023
How a Rotterdam fraud algorithm learned to suspect vulnerable groups
An algorithm with which the city of Rotterdam predicted welfare fraud for years ranked young mothers and people who speak Dutch poorly among the highest-risk groups. They stood the greatest chance of a strict check by the city. This emerges from an investigation by Lighthouse Reports, Argos, Vers Beton and Follow the Money, in which journalists got their hands on a complete fraud algorithm for the first time.
By David Davidson and Tom Claessens for Follow the Money on March 6, 2023
Inside the Suspicion Machine
Obscure government algorithms are making life-changing decisions about millions of people around the world. Here, for the first time, we reveal how one of these systems works.
By Dhruv Mehrotra, Eva Constantaras, Gabriel Geiger, Htet Aung and Justin-Casimir Braun for WIRED on March 6, 2023
The Algorithm Addiction
Mass profiling system SyRI resurfaces in the Netherlands despite ban and landmark court ruling.
By Allart van der Woude, Daniel Howden, David Davidson, Evaline Schot, Gabriel Geiger, Judith Konijn, Ludo Hekman, Marc Hijink, May Bulman and Saskia Adriaens for Lighthouse Reports on December 20, 2022
Word embeddings quantify 100 years of gender and ethnic stereotypes
Word embeddings are a popular machine-learning method that represents each English word by a vector, such that the geometry between these vectors captures semantic relations between the corresponding words. We demonstrate that word embeddings can be used as a powerful tool to quantify historical trends and social change. As specific applications, we develop metrics based on word embeddings to characterize how gender stereotypes and attitudes toward ethnic minorities in the United States evolved during the 20th and 21st centuries starting from 1910. Our framework opens up a fruitful intersection between machine learning and quantitative social science.
By Dan Jurafsky, James Zou, Londa Schiebinger and Nikhil Garg for PNAS on April 3, 2018
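The paper's central measurement can be sketched in a few lines. Below is a rough, hedged approximation in the spirit of its relative-norm metric, assuming a pretrained GloVe embedding fetched through gensim's downloader; the word lists are short illustrative stand-ins, not the paper's curated sets.

```python
# Hedged sketch of an embedding bias metric in the spirit of Garg et al.;
# embedding choice and word lists are illustrative assumptions.
import numpy as np
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # any KeyedVectors object works

she_words = ["she", "her", "woman", "mother", "daughter"]
he_words = ["he", "his", "man", "father", "son"]
occupations = ["nurse", "engineer", "librarian", "carpenter", "teacher"]

def group_mean(words):
    # Average vector representing one group of words.
    return np.mean([vectors[w] for w in words], axis=0)

she_vec, he_vec = group_mean(she_words), group_mean(he_words)

# Relative norm difference: negative values mean the occupation sits
# closer to the "she" group, positive values closer to the "he" group.
for occ in occupations:
    bias = np.linalg.norm(vectors[occ] - she_vec) - np.linalg.norm(vectors[occ] - he_vec)
    print(f"{occ:>10}: {bias:+.3f}")
```

Run the same measurement on embeddings trained on texts from different decades, as the paper does, and the drift of these numbers traces how stereotypes changed over time.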
Quantifying bias in society with ChatGPT-like tools
ChatGPT is an implementation of a so-called ‘large language model’. These models are trained on text from the internet at large, which means they inherit the bias that exists in our language and in our society. This has an interesting consequence: it suddenly becomes possible to see, in a quantitative and undeniable way, how bias changes over time.
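To make this concrete, here is a hedged sketch of one way to probe a language model for such associations. A masked language model (BERT) stands in for a ChatGPT-style model here because its fill-in-the-blank probabilities are easy to read off; the model choice and prompts are illustrative assumptions, not a method from the piece above.

```python
# Hedged sketch: probe a masked language model for gendered associations.
# Model and prompts are illustrative assumptions.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

for profession in ["doctor", "nurse", "engineer"]:
    prompt = f"The {profession} said that [MASK] would arrive soon."
    # Restrict scoring to the two pronouns and compare their probabilities.
    scores = {r["token_str"]: round(r["score"], 3)
              for r in fill(prompt, targets=["he", "she"])}
    print(profession, scores)
```

Apply the same probe to models trained on corpora from different eras and the gap between these probabilities becomes a rough, quantitative record of how the bias in our language shifts.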
Racist Technology in Action: Let’s make an avatar! Of sexy women and tough men of course
Just upload a selfie in the “AI avatar app” Lensa and it will generate a digital portrait of you. Think, for example, of a slightly more fit or beautiful version of yourself as an astronaut or the lead singer in a band. If you are a man, that is. As it turns out, for women, and especially women of Asian heritage, Lensa churns out pornified, sexy and skimpily clothed avatars.
The viral AI avatar app Lensa undressed me—without my consent
My avatars were cartoonishly pornified, while my male colleagues got to be astronauts, explorers, and inventors.
By Melissa Heikkilä for MIT Technology Review on December 12, 2022
Twitter’s Picture Preview Feature is Both Racist and Colorist
In the past few years, Black people have been inundated with the ways that technology hates us just as much as white supremacy does (then again, technology is an extension and weapon of white supremacy in many ways).
By Lelia Hampton for Lelia Hampton on December 13, 2020
In poor neighbourhoods, the government still predicts fraud
Even after the ban on the ‘dragnet’ SyRI, the government still predicts fraud at addresses in socio-economically weaker neighbourhoods. Argos and Lighthouse Reports investigated the method, in which municipalities and agencies such as the tax authority (Belastingdienst), benefits agency UWV and the police share risk signals. ‘This is about a government that knows so much about you that it can always find something.’
By David Davidson and Saskia Adriaens for VPRO on December 20, 2022
Surveillance Tech Perpetuates Police Abuse of Power
Among global movements to reckon with police powers, a new report from UK research group No Tech For Tyrants unveils how police use surveillance technology to abuse power around the world.
From No Tech for Tyrants on November 7, 2022
The devastating consequences of risk-based profiling by the Dutch police
Diana Sardjoe writes for Fair Trials about how her sons were profiled by the Amsterdam police on the basis of risk models (a form of predictive policing) called ‘Top600’ (for adults) and ‘Top400’ (for people aged 12 to 23). Because of this profiling her sons were “continually monitored and harassed by police.”
ExamSoft’s remote bar exam sparks privacy and facial recognition concerns
To administer bar exams in 20 different states next week, ExamSoft is using facial recognition and collecting the biometric data of legal professionals.
By Khari Johnson for VentureBeat on September 29, 2020
High time for an investigation into institutional racism in municipalities
There should be a moratorium on the use of algorithms for risk profiling, argues Samira Rafaela, Member of the European Parliament for D66.
By Samira Rafaela for Binnenlands Bestuur on October 10, 2022