Ignoring earlier Dutch failures in automated decision making, and ignoring advice from its own experts, the Dutch Ministry of Foreign Affairs has decided to cut costs and cut corners by implementing a discriminatory profiling system to process visa applications.
Continue reading “Racist Technology in Action: You look similar to someone we didn’t like → Dutch visa denied”

Racist Technology in Action: Racial disparities in the scoring system used for housing allocation in L.A.
In another investigation by The Markup, significant racial disparities were found in the assessment system used by the Los Angeles Homeless Services Authority (LAHSA), the body responsible for coordinating homelessness services in Los Angeles. This assessment system relies on a tool, called the Vulnerability Index-Service Prioritisation Decision Assistance Tool (VI-SPDAT), to score people and assess whether they qualify for subsidised permanent housing.
Continue reading “Racist Technology in Action: Racial disparities in the scoring system used for housing allocation in L.A.”

Racist Technology in Action: Rotterdam’s welfare fraud prediction algorithm was biased
The algorithm that the city of Rotterdam used to predict the risk of welfare fraud fell into the hands of journalists. It turns out that the system was biased against marginalised groups like young mothers and people who don’t have Dutch as their first language.
Continue reading “Racist Technology in Action: Rotterdam’s welfare fraud prediction algorithm was biased”

Racist Technology in Action: The “underdiagnosis bias” in AI algorithms for health: Chest radiographs
This study builds upon work on algorithmic bias and bias in healthcare. The use of AI-based diagnostic tools has been motivated by a global shortage of radiologists, and by research showing that AI algorithms can match specialist performance (particularly in medical imaging). Yet the topic of AI-driven underdiagnosis has remained relatively unexplored.
Continue reading “Racist Technology in Action: The “underdiagnosis bias” in AI algorithms for health: Chest radiographs”

Racist Technology in Action: Let’s make an avatar! Of sexy women and tough men of course
Just upload a selfie in the “AI avatar app” Lensa and it will generate a digital portrait of you. Think, for example, of a slightly more fit or beautiful version of yourself as an astronaut or the lead singer in a band. If you are a man, that is. As it turns out, for women, and especially women of Asian heritage, Lensa churns out pornified, sexy and skimpily clothed avatars.
Continue reading “Racist Technology in Action: Let’s make an avatar! Of sexy women and tough men of course”

Racist Technology in Action: AI-generated image tools amplify harmful stereotypes
Deep learning models that allow you to make images from simple textual ‘prompts’ have recently become available to the general public. Because these tools have been trained on a world full of visual representations of social stereotypes, it comes as no surprise that they perpetuate a lot of biased and harmful imagery.
Continue reading “Racist Technology in Action: AI-generated image tools amplify harmful stereotypes”

Racist Technology in Action: Robot rapper spouts racial slurs and the N-word
In this bizarre yet unsurprising example of an AI-gone-wrong (or “rogue”), an artificially designed robot rapper, FN Meka, has been dropped by Capitol Music Group for racial slurs and the use of the N-word in his music.
Continue reading “Racist Technology in Action: Robot rapper spouts racial slurs and the N-word”

Racist Technology in Action: How hiring tools can be sexist and racist
One of the classic examples of how AI systems can reinforce social injustice is Amazon’s A.I. hiring tool. In 2014, Amazon built an ‘A.I.-powered’ tool to assess resumes and recommend the top candidates to go on to be interviewed. However, the tool turned out to be very biased, systematically preferring men over women.
Continue reading “Racist Technology in Action: How hiring tools can be sexist and racist”

Racist Technology in Action: Turning a Black person, White
An example of racial bias in machine learning strikes again, this time in a program called PULSE, as reported by The Verge. Input a low-resolution image of Barack Obama – or another person of colour such as Alexandria Ocasio-Cortez or Lucy Liu – and the resulting AI-generated high-resolution image is distinctly a white person.
Continue reading “Racist Technology in Action: Turning a Black person, White”

Racist Technology in Action: Beauty is in the eye of the AI
Where people’s notion of beauty is often steeped in cultural preferences or plain prejudice, the objectivity of an AI system would surely allow it to access a more universal conception of beauty – or so thought the developers of Beauty.AI. Alex Zhavoronkov, who consulted on the development of the Beauty.AI system, described the dystopian motivation behind the system clearly: “Humans are generally biased and there needs to be a robot to provide an impartial opinion. Beauty.AI is the first step in a much larger story, in which a mobile app trained to evaluate perception of human appearance will evolve into a caring personal assistant to help users look their best and retain their youthful looks.”
Continue reading “Racist Technology in Action: Beauty is in the eye of the AI”

Racist Technology in Action: Chest X-ray classifiers exhibit racial, gender and socio-economic bias
The development and use of AI and machine learning in healthcare is proliferating. A 2020 study has shown that chest X-ray datasets that are used to train diagnostic models are biased against certain racial, gender and socioeconomic groups.
Continue reading “Racist Technology in Action: Chest X-ray classifiers exhibit racial, gender and socio-economic bias”

Racist Technology in Action: Oxygen meters designed for white skin
‘Oximeters’ are small medical devices used to measure levels of oxygen in someone’s blood. The oximeter can be clipped over someone’s finger and uses specific frequencies of light beamed through the skin to measure the saturation of oxygen in the blood.
Continue reading “Racist Technology in Action: Oxygen meters designed for white skin”

Racist Technology in Action: “Race-neutral” traffic cameras have a racially disparate impact
Traffic cameras that are used to automatically hand out speeding tickets don’t look at the colour of the person driving the speeding car. Yet, ProPublica has convincingly shown how cameras that don’t have a racial bias can still have a disparate racial impact.
Continue reading “Racist Technology in Action: “Race-neutral” traffic cameras have a racially disparate impact”

Racist Technology in Action: U.S. universities using race in their risk algorithms as a predictor for student success
An investigation by The Markup in March 2021 revealed that some universities in the U.S. are using software with a risk algorithm that takes a student’s race as one of the factors to predict and evaluate how successful that student may be. Several universities have described race as a “high impact predictor”. The investigation found large disparities in how the software treated students of different races, with Black students deemed four times as high a risk as their White peers.
Continue reading “Racist Technology in Action: U.S. universities using race in their risk algorithms as a predictor for student success”

Racist Technology in Action: Uber’s racially discriminatory facial recognition system firing workers
This example of racist technology in action combines racist facial recognition systems with exploitative working conditions and algorithmic management to produce a perfect example of how technology can exacerbate both economic precarity and racial discrimination.
Continue reading “Racist Technology in Action: Uber’s racially discriminatory facial recognition system firing workers”

Racist Technology in Action: an AI for ethical advice turns out to be super racist
In mid-October 2021, the Allen Institute for AI launched Delphi, an AI research prototype designed “to model people’s moral judgments on a variety of everyday situations.” In simple words: they made a machine that tries to do ethics.
Continue reading “Racist Technology in Action: an AI for ethical advice turns out to be super racist”

Racist Technology in Action: Facebook labels black men as ‘primates’
In the summer of 2020, during the reckoning of the Black Lives Matter movement, the Daily Mail, a British tabloid, posted a video featuring black men in an altercation with the police and white civilians. In the New York Times, Ryan Mac reports how Facebook users who watched that video saw an automated prompt asking if they would like to “keep seeing videos about Primates,” despite the video having no connection to primates or monkeys.
Continue reading “Racist Technology in Action: Facebook labels black men as ‘primates’”

Racist Technology in Action: White preference in mortgage-approval algorithms
A very clear example of racist technology was exposed by Emmanuel Martinez and Lauren Kirchner in an article for The Markup. Algorithms used by a variety of American banks and lenders to automatically assess or advise on mortgages display a clear racial disparity. In national data from the United States in 2019, they found that “loan applicants of color were 40%–80% more likely to be denied than their White counterparts. In certain metro areas, the disparity was greater than 250%.”
Continue reading “Racist Technology in Action: White preference in mortgage-approval algorithms”

Racist Technology in Action: Racist search engine ads
Back in 2013, Harvard professor Latanya Sweeney was one of the first people to demonstrate racism (she called it ‘discrimination’) in online algorithms. She did this with her research on the ad delivery practices of Google.
Continue reading “Racist Technology in Action: Racist search engine ads”

Racist Technology in Action: Apple’s emoji keyboard reinforces Western stereotypes
Time and time again, big tech companies have shown their ability and power to (mis)represent and (re)shape our digital world. From speech, to images, and most recently, to the emojis that we regularly use.
Continue reading “Racist Technology in Action: Apple’s emoji keyboard reinforces Western stereotypes”

Racist Technology in Action: Proctoring software disadvantaging students of colour in the Netherlands
In an opinion piece in Parool, the Racism and Technology Center wrote about how Dutch universities use proctoring software with facial recognition technology that systematically disadvantages students of colour (see the English translation of the opinion piece). Earlier, the center had written on the racial bias of these systems, which led to black students being excluded from exams or being labeled as frauds because the software did not properly recognise their faces as faces. Despite the clear proof that Proctorio disadvantages students of colour, the University of Amsterdam still used Proctorio extensively in this June’s exam weeks.
Continue reading “Racist Technology in Action: Proctoring software disadvantaging students of colour in the Netherlands”

Racist Technology in Action: Predicting future criminals with a bias against Black people
In 2016, ProPublica investigated the fairness of COMPAS, a system used by the courts in the United States to assess the likelihood of a defendant committing another crime. COMPAS uses a risk assessment form to estimate this risk of reoffending. Judges are expected to take this risk prediction into account when they decide on sentencing.
Continue reading “Racist Technology in Action: Predicting future criminals with a bias against Black people”

Racist Technology in Action: Speech recognition systems by major tech companies are biased
From Siri, to Alexa, to Google Now, voice-based virtual assistants have become increasingly ubiquitous in our daily lives. So it is unsurprising that yet another AI technology – speech recognition systems – has been reported to be biased against black people.
Continue reading “Racist Technology in Action: Speech recognition systems by major tech companies are biased”

Racist Technology in Action: Amazon’s racist facial ‘Rekognition’
An already infamous example of racist technology is Amazon’s facial recognition system ‘Rekognition’, which had an enormous racial and gender bias. Researcher and founder of the Algorithmic Justice League Joy Buolamwini (the ‘poet of code‘), together with Deborah Raji, meticulously reconstructed how accurate Rekognition was in identifying different types of faces. Buolamwini and Raji’s study has been extremely consequential in laying bare the racism and sexism in these facial recognition systems and was featured in the popular Coded Bias documentary.
Continue reading “Racist Technology in Action: Amazon’s racist facial ‘Rekognition’”

Racist technology in action: White only soap dispensers
In 2015, when T.J. Fitzpatrick visited a conference in Atlanta, he wasn’t able to use any of the soap dispensers in the bathroom.
Continue reading “Racist technology in action: White only soap dispensers”

Racist technology in action: Gun, or electronic device?

The answer to that question apparently depends on your skin colour. An AlgorithmWatch reporter, Nicholas Kayser-Bril, conducted an experiment that went viral on Twitter, showing that Google Cloud Vision (a service based on the subset of AI known as “computer vision”, which focuses on automated image labelling) labelled an image of a dark-skinned individual holding a thermometer with the word “gun”, whilst a lighter-skinned individual was labelled as holding an “electronic device”.
Continue reading “Racist technology in action: Gun, or electronic device?”

Racist technology in action: Cropping out the non-white
A recent, yet already classic, example of racist technology is Twitter’s photo-cropping machine learning algorithm. The algorithm was shown to consistently prefer white faces in the cropped previews of pictures.
Continue reading “Racist technology in action: Cropping out the non-white”