Banks are required to ‘know their customers’ and to look out for money laundering and the financing of terrorism. Their vigilante efforts lead to racist outcomes.
Continue reading “Racist Technology in Action: Anti-money laundering efforts by Dutch banks disproportionately affect people with a non-Western migration background”

Racist Technology in Action: No emojis feature hairstyles typically worn by Black people
None of the nearly 4,000 emojis feature afro hairstyles, grossly misrepresenting a large part of our world. Rise.365, a British community support group, is trying to change that.
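For those curious about the mechanics: Unicode builds emoji variants by combining a base emoji with modifiers, and while it defines five skin-tone modifiers and four hair ‘components’ (red, curly, bald, white), none of those components is an afro. A minimal Python sketch of the combining mechanism (the code points are from the Unicode standard; how they render depends on your platform’s emoji font):

    # Sketch: how Unicode composes emoji variants via modifiers and ZWJ sequences.
    ZWJ = "\u200d"  # zero-width joiner, glues components into one glyph

    # The five Fitzpatrick skin-tone modifiers (U+1F3FB..U+1F3FF).
    SKIN_TONES = [chr(cp) for cp in range(0x1F3FB, 0x1F400)]

    # The only four hair components Unicode defines (U+1F9B0..U+1F9B3):
    # red, curly, bald and white hair -- there is no afro component.
    HAIR = {"red": "\U0001F9B0", "curly": "\U0001F9B1",
            "bald": "\U0001F9B2", "white": "\U0001F9B3"}

    PERSON = "\U0001F468"  # base emoji: man

    # Skin tone is appended directly; hair is attached with a ZWJ.
    for tone in SKIN_TONES:
        for name, hair in HAIR.items():
            print(f"{PERSON}{tone}{ZWJ}{hair}  (man, skin tone + {name} hair)")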
Continue reading “Racist Technology in Action: No emojis feature hairstyles typically worn by Black people”

Racist Technology in Action: Michigan car insurers are allowed to charge a higher premium in Black neighbourhoods
An investigation by The Markup and Outlier Media shows how the law in Michigan allows car insurers to take location into account when deciding on a premium, penalizing the state’s Black population.
Continue reading “Racist Technology in Action: Michigan car insurers are allowed to charge a higher premium in Black neighbourhoods”

Racist Technology in Action: AI detection of emotion rates Black basketball players as ‘angrier’ than their White counterparts
In 2018, Lauren Rhue showed that two leading emotion detection software products had a racial bias against Black men: Face++ rated them as angrier, and Microsoft AI rated them as more contemptuous.
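The shape of such an audit is straightforward: run the same set of player photos through each product, record the emotion scores, and compare the distributions by race. A minimal Python sketch of the comparison step, assuming the API responses have already been collected into a CSV (the column names race, anger and contempt are hypothetical):

    # Sketch of the bias-audit comparison step (column names are assumed).
    import pandas as pd

    scores = pd.read_csv("emotion_scores.csv")  # one row per player photo

    # Mean emotion score per racial group.
    by_race = scores.groupby("race")[["anger", "contempt"]].mean()
    print(by_race)

    # Gap between groups: positive values mean Black players were scored
    # higher on that emotion than White players for comparable photos.
    gap = by_race.loc["Black"] - by_race.loc["White"]
    print(gap)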
Continue reading “Racist Technology in Action: AI detection of emotion rates Black basketball players as ‘angrier’ than their White counterparts”

Racist Technology in Action: MyLife.com and discriminatory predation
MyLife.com is one of those immoral American companies that collects personal information to sell on as profiles, while at the same time suggesting to the people being profiled that incriminating information about them exists online, which they can have removed by buying a subscription (one that then does nothing and auto-renews in perpetuity).
Continue reading “Racist Technology in Action: MyLife.com and discriminatory predation”

Racist Technology in Action: Autocorrect is Western- and White-focused
The “I am not a typo” campaign is asking the tech giants to update their name dictionaries and stop autocorrecting the 41% of names given to babies in England and Wales that are flagged as misspellings.
Continue reading “Racist Technology in Action: Autocorrect is Western- and White-focused”

Racist Technology in Action: Outsourced labour in Nigeria is shaping AI English
Generative AI uses particular English words far more often than you would expect. Even though it is impossible to know for sure that any particular text was written by AI, you can say something about it in aggregate.
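The aggregate analysis amounts to simple word-frequency arithmetic: count how often suspected marker words appear per million words in text from before and after the arrival of generative AI, and compare. A rough Python sketch (the file names and word list are illustrative):

    # Sketch: relative frequency of suspected "AI English" marker words
    # in two corpora (file names and word list are illustrative).
    import re
    from collections import Counter

    MARKERS = {"delve", "showcasing", "underscores", "meticulous"}

    def per_million(path: str) -> dict:
        words = re.findall(r"[a-z']+", open(path, encoding="utf-8").read().lower())
        counts = Counter(w for w in words if w in MARKERS)
        return {w: counts[w] * 1_000_000 / len(words) for w in MARKERS}

    before = per_million("corpus_pre_2022.txt")
    after = per_million("corpus_post_2022.txt")
    for w in MARKERS:
        ratio = after[w] / before[w] if before[w] else float("inf")
        print(f"{w}: {before[w]:.1f} -> {after[w]:.1f} per million ({ratio:.1f}x)")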
Continue reading “Racist Technology in Action: Outsourced labour in Nigeria is shaping AI English”

Racist Technology in Action: The UK Home Office’s Sorting Algorithm and the Racist Violence of Borders
In 2020, two NGOs finally forced the UK Home Office’s hand, compelling it to abandon its secretive and racist algorithm for sorting visitor visa applications. Foxglove and the Joint Council for the Welfare of Immigrants (JCWI) had been battling the algorithm for years, arguing that it was a form of institutionalized racism and calling it “speedy boarding for white people.”
Continue reading “Racist Technology in Action: The UK Home Office’s Sorting Algorithm and the Racist Violence of Borders”

Racist Technology in Action: ChatGPT detectors are biased against non-native English writers
Students are using ChatGPT to write their essays. Anti-plagiarism tools try to detect whether a text was written by AI. It turns out that these detectors consistently misclassify the writing of non-native English speakers as AI-generated.
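The likely mechanism is worth spelling out: many detectors flag text as AI-generated when it is too statistically predictable (low ‘perplexity’), and non-native writers tend to use more common words and more conventional phrasing, which makes their text more predictable. A toy Python sketch of that detection logic (the scoring model and threshold are illustrative, not any vendor’s actual method):

    # Toy sketch of perplexity-style AI detection; illustrative only.
    import math
    from collections import Counter

    # A crude "language model": background frequencies of common words.
    background = Counter(("the of and to a in is it you that he was for on are "
                          "with as his they be at one have this").split() * 100)
    TOTAL = sum(background.values())
    VOCAB = len(background)

    def predictability(text: str) -> float:
        """Average log-probability per word; higher means more predictable."""
        words = text.lower().split()
        return sum(math.log((background[w] + 1) / (TOTAL + VOCAB))
                   for w in words) / len(words)

    THRESHOLD = -5.0  # illustrative cut-off
    essay = "the cat is on the mat and it is a cat"
    score = predictability(essay)
    # Simple, conventional wording scores as highly predictable and gets
    # flagged -- exactly what happens to many non-native writers.
    print(score, "flagged as AI" if score > THRESHOLD else "judged human")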
Continue reading “Racist Technology in Action: ChatGPT detectors are biased against non-native English writers”

Racist Technology in Action: Slower internet service for the same price in U.S. lower income areas with fewer White residents
Investigative reporting by The Markup showed how U.S. internet providers offer wildly different internet speeds for the same monthly fee. The neighbourhoods with the worst deals had lower median incomes and were very often the least White.
Continue reading “Racist Technology in Action: Slower internet service for the same price in U.S. lower income areas with fewer White residents”

Racist Technology in Action: Meta systemically censors and silences Palestinian content globally
The censorship and silencing of Palestinian voices, and the voices of those who support Palestine, is not new. However, since the escalation of Israel’s violence in the Gaza Strip on 7 October 2023, the scale of censorship has heightened significantly, particularly on social media platforms such as Instagram and Facebook. In December 2023, Human Rights Watch (HRW) released a 51-page report stating that Meta has engaged in systematic and global censorship of content related to Palestine since 7 October.
Continue reading “Racist Technology in Action: Meta systemically censors and silences Palestinian content globally”

Racist Technology in Action: Generative/ing AI Bias
By now we know that generative image AI reproduces and amplifies sexism, racism, and other social systems of oppression. The latest example: AI-generated stickers in WhatsApp that systematically depict Palestinian men and boys with rifles and guns.
Continue reading “Racist Technology in Action: Generative/ing AI Bias”

Racist Technology in Action: Flagged as risky simply for requesting social assistance in Veenendaal, The Netherlands
This collaborative investigation by Spotlight Bureau, Lighthouse Reports and Follow the Money dives into the story of a Moroccan-Dutch family in Veenendaal that was singled out for fraud investigation by the Dutch government.
Continue reading “Racist Technology in Action: Flagged as risky simply for requesting social assistance in Veenendaal, The Netherlands”

Racist Technology in Action: The World Bank’s Poverty Targeting Algorithms Deprive People of Social Security
A system funded by the World Bank to assess who is most in need of support is reported to be not only faulty but also discriminatory, depriving many of their right to social security. In a recent report titled “Automated Neglect: How The World Bank’s Push to Allocate Cash Assistance Using Algorithms Threatens Rights”, Human Rights Watch outlines why the system used in Jordan, specifically, should be abandoned.
Continue reading “Racist Technology in Action: The World Bank’s Poverty Targeting Algorithms Deprive People of Social Security”

Racist Technology in Action: How Pokémon Go inherited existing racial inequities
When Aura Bogado was playing Pokémon Go in a much Whiter neighbourhood than the one where she lived, she noticed how many more PokéStops were suddenly available. She then crowdsourced locations of these stops and found out, together with the Urban Institute think tank, that there were on average 55 PokéStops in majority White neighbourhoods and 19 in majority Black neighbourhoods.
Continue reading “Racist Technology in Action: How Pokémon Go inherited existing racial inequities”

Racist Technology in Action: Image recognition is still not capable of differentiating gorillas from Black people
If this title feels like déjà vu, it is because you most likely have, in fact, seen this before (perhaps even in our newsletter). It was back in 2015 that the controversy first arose, when Google released image recognition software that kept mislabelling Black people as gorillas.
Continue reading “Racist Technology in Action: Image recognition is still not capable of differentiating gorillas from Black people”

Racist Technology in Action: You look similar to someone we didn’t like → Dutch visa denied
Ignoring earlier Dutch failures in automated decision-making, and ignoring advice from its own experts, the Dutch Ministry of Foreign Affairs has decided to cut costs and cut corners by implementing a discriminatory profiling system to process visa applications.
Continue reading “Racist Technology in Action: You look similar to someone we didn’t like → Dutch visa denied”

Racist Technology in Action: Racial disparities in the scoring system used for housing allocation in L.A.
In another investigation by The Markup, significant racial disparities were found in the assessment system used by the Los Angeles Homeless Services Authority (LAHSA), the body responsible for coordinating homelessness services in Los Angeles. This assessment system relies on a tool, the Vulnerability Index-Service Prioritisation Decision Assistance Tool (VI-SPDAT), to score people and assess whether they qualify for subsidised permanent housing.
Continue reading “Racist Technology in Action: Racial disparities in the scoring system used for housing allocation in L.A.”

Racist Technology in Action: Rotterdam’s welfare fraud prediction algorithm was biased
The algorithm that the city of Rotterdam used to predict the risk of welfare fraud fell into the hands of journalists. It turns out that the system was biased against marginalised groups such as young mothers and people who don’t have Dutch as their first language.
Continue reading “Racist Technology in Action: Rotterdam’s welfare fraud prediction algorithm was biased”

Racist Technology in Action: The “underdiagnosis bias” in AI algorithms for health: Chest radiographs
This study builds upon work on algorithmic bias and bias in healthcare. The use of AI-based diagnostic tools has been motivated by a global shortage of radiologists, and by research showing that AI algorithms can match specialist performance (particularly in medical imaging). Yet the topic of AI-driven underdiagnosis has remained relatively unexplored.
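‘Underdiagnosis’ has a precise meaning here: the model labels the X-ray of a sick patient as ‘no finding’, a false negative, so the patient risks being sent home untreated. The study’s core measurement can be sketched as computing that rate separately per patient group; a rough Python version, assuming predictions and demographics already sit in one table (column names are hypothetical):

    # Sketch: measuring underdiagnosis bias as the false-negative ("no finding")
    # rate per demographic group. Column names are assumed for illustration.
    import pandas as pd

    df = pd.read_csv("chest_xray_predictions.csv")
    # Keep only patients who actually have a finding; underdiagnosis means the
    # model predicted "no finding" for them anyway.
    sick = df[df["label_has_finding"] == 1]
    underdx = (sick[sick["pred_no_finding"] == 1]
               .groupby(["sex", "race", "insurance"]).size()
               / sick.groupby(["sex", "race", "insurance"]).size())
    print(underdx.sort_values(ascending=False))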
Continue reading “Racist Technology in Action: The “underdiagnosis bias” in AI algorithms for health: Chest radiographs”

Racist Technology in Action: Let’s make an avatar! Of sexy women and tough men of course
Just upload a selfie to the “AI avatar app” Lensa and it will generate a digital portrait of you. Think, for example, of a slightly fitter or more beautiful version of yourself as an astronaut or the lead singer in a band. If you are a man, that is. As it turns out, for women, and especially women of Asian heritage, Lensa churns out pornified, sexy and skimpily clothed avatars.
Continue reading “Racist Technology in Action: Let’s make an avatar! Of sexy women and tough men of course”

Racist Technology in Action: AI-generated image tools amplify harmful stereotypes
Deep learning models that allow you to make images from simple textual ‘prompts’ have recently become available to the general public. Having been trained on a world full of visual representations of social stereotypes, these tools unsurprisingly perpetuate a lot of biased and harmful imagery.
Continue reading “Racist Technology in Action: AI-generated image tools amplify harmful stereotypes”

Racist Technology in Action: Robot rapper spouts racial slurs and the N-word
In this bizarre yet unsurprising example of an AI gone wrong (or “rogue”), the artificially designed robot rapper FN Meka has been dropped by Capitol Music Group for racial slurs and the use of the N-word in his music.
Continue reading “Racist Technology in Action: Robot rapper spouts racial slurs and the N-word”

Racist Technology in Action: How hiring tools can be sexist and racist
One of the classic examples of how AI systems can reinforce social injustice is Amazon’s AI hiring tool. In 2014, Amazon built an ‘AI-powered’ tool to assess resumes and recommend the top candidates who would go on to be interviewed. However, the tool turned out to be very biased, systematically preferring men over women.
Continue reading “Racist Technology in Action: How hiring tools can be sexist and racist”

Racist Technology in Action: Turning a Black person, White
An example of racial bias in machine learning strikes again, this time by a program called PULSE, as reported by The Verge. Input a low-resolution image of Barack Obama – or another person of colour such as Alexandria Ocasio-Cortez or Lucy Liu – and the resulting high-resolution, AI-generated output is distinctly a white person.
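Technically, PULSE does not ‘restore’ the original face: it searches the latent space of a face generator (StyleGAN) for any high-resolution face that, when scaled back down, matches the low-resolution input, so the output drifts toward whatever faces dominate the generator’s training data. A rough Python sketch of that optimisation loop (the generator is a stand-in for a pretrained model; this is not the actual PULSE code):

    # Sketch of PULSE-style upsampling: optimise a latent vector so that the
    # generated hi-res face, once downscaled, matches the low-res input.
    # `generator` is a stand-in for a pretrained StyleGAN.
    import torch
    import torch.nn.functional as F

    def pulse_like_upsample(low_res, generator, steps=500, lr=0.05):
        z = torch.randn(1, 512, requires_grad=True)        # latent guess
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(steps):
            hi_res = generator(z)                           # e.g. 1024x1024 face
            downscaled = F.interpolate(hi_res, size=low_res.shape[-2:],
                                       mode="bilinear", align_corners=False)
            loss = F.mse_loss(downscaled, low_res)          # match the input
            opt.zero_grad(); loss.backward(); opt.step()
        # The result is *a* plausible face, biased toward the generator's
        # training distribution -- hence Obama coming out white.
        return generator(z).detach()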
Continue reading “Racist Technology in Action: Turning a Black person, White”

Racist Technology in Action: Beauty is in the eye of the AI
Where people’s notion of beauty is often steeped in cultural preferences or plain prejudice, the objectivity of an AI system would surely allow it to access a more universal conception of beauty – or so thought the developers of Beauty.AI. Alex Zhavoronkov, who consulted on the development of the Beauty.AI system, described the dystopian motivation behind the system clearly: “Humans are generally biased and there needs to be a robot to provide an impartial opinion. Beauty.AI is the first step in a much larger story, in which a mobile app trained to evaluate perception of human appearance will evolve into a caring personal assistant to help users look their best and retain their youthful looks.”
Continue reading “Racist Technology in Action: Beauty is in the eye of the AI”

Racist Technology in Action: Chest X-ray classifiers exhibit racial, gender and socio-economic bias
The development and use of AI and machine learning in healthcare is proliferating. A 2020 study has shown that chest X-ray datasets that are used to train diagnostic models are biased against certain racial, gender and socioeconomic groups.
Continue reading “Racist Technology in Action: Chest X-ray classifiers exhibit racial, gender and socio-economic bias”

Racist Technology in Action: Oxygen meters designed for white skin
‘Oximeters’ are small medical devices used to measure the level of oxygen in someone’s blood. The oximeter can be clipped over someone’s finger and uses specific frequencies of light beamed through the skin to measure the saturation of oxygen in the blood.
Continue reading “Racist Technology in Action: Oxygen meters designed for white skin”

Racist Technology in Action: “Race-neutral” traffic cameras have a racially disparate impact
Traffic cameras that are used to automatically hand out speeding tickets don’t look at the colour of the person driving the speeding car. Yet, ProPublica has convincingly shown how cameras that don’t have a racial bias can still have a disparate racial impact.
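ProPublica’s argument is statistical rather than optical: if the cameras are concentrated on the kinds of roads that run through majority-Black neighbourhoods, an unbiased sensor still produces racially skewed ticketing. The core check is a per-capita comparison, sketched here in Python with hypothetical column names:

    # Sketch: per-capita ticket rates by neighbourhood composition
    # (column names are assumed for illustration).
    import pandas as pd

    df = pd.read_csv("tickets_by_neighbourhood.csv")
    df["majority"] = df["share_black"].apply(
        lambda s: "majority Black" if s > 0.5 else "other")

    rates = (df.groupby("majority")["tickets"].sum()
             / df.groupby("majority")["population"].sum())
    print(rates)  # tickets per resident; a gap here is the disparate impact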
Continue reading “Racist Technology in Action: “Race-neutral” traffic cameras have a racially disparate impact”

Racist Technology in Action: U.S. universities using race in their risk algorithms as a predictor for student success
An investigation by The Markup in March 2021 revealed that some universities in the U.S. are using software with a risk algorithm that uses a student’s race as one of the factors to predict and evaluate how successful that student may be. Several universities have described race as a “high impact predictor”. The investigation found large disparities in how the software treated students of different races, with Black students deemed high risk at four times the rate of their White peers.
Continue reading “Racist Technology in Action: U.S. universities using race in their risk algorithms as a predictor for student success”

Racist Technology in Action: Uber’s racially discriminatory facial recognition system firing workers
This example of racist technology in action combines racist facial recognition systems with exploitative working conditions and algorithmic management, showing how technology can exacerbate both economic precarity and racial discrimination.
Continue reading “Racist Technology in Action: Uber’s racially discriminatory facial recognition system firing workers”

Racist Technology in Action: an AI for ethical advice turns out to be super racist
In mid-October 2021, the Allen Institute for AI launched Delphi, an AI research prototype designed “to model people’s moral judgments on a variety of everyday situations.” In simple words: they made a machine that tries to do ethics.
Continue reading “Racist Technology in Action: an AI for ethical advice turns out to be super racist”

Racist Technology in Action: Facebook labels black men as ‘primates’
Amid the reckoning of the Black Lives Matter movement in the summer of 2020, the Daily Mail, a British tabloid, posted a video featuring Black men in altercations with the police and white civilians. In The New York Times, Ryan Mac reports how Facebook users who watched that video saw an automated prompt asking whether they would like to “keep seeing videos about Primates,” despite the video having no connection to primates or monkeys.
Continue reading “Racist Technology in Action: Facebook labels black men as ‘primates’”

Racist Technology in Action: White preference in mortgage-approval algorithms
A very clear example of racist technology was exposed by Emmanuel Martinez and Lauren Kirchner in an article for The Markup. Algorithms used by a variety of American banks and lenders to automatically assess or advise on mortgages display clear racial disparities. In national data from the United States in 2019, they found that “loan applicants of color were 40%–80% more likely to be denied than their White counterparts. In certain metro areas, the disparity was greater than 250%.”
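The Markup derived those percentages from the public HMDA mortgage records, using logistic regression to control for the factors lenders say they consider. A drastically simplified, unadjusted version of the comparison can be sketched in a few lines of Python (file and column names are illustrative):

    # Simplified sketch: unadjusted denial-rate comparison from mortgage records.
    # The Markup's real analysis was a logistic regression controlling for
    # income, loan size, etc.; column names here are illustrative.
    import pandas as pd

    loans = pd.read_csv("hmda_2019.csv")
    denial_rate = loans.groupby("applicant_race")["denied"].mean()

    relative = denial_rate / denial_rate["White"]
    print(relative)  # e.g. 1.4 = denied 40% more often than White applicants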
Continue reading “Racist Technology in Action: White preference in mortgage-approval algorithms”

Racist Technology in Action: Racist search engine ads
Back in 2013, Harvard professor Latanya Sweeney was one of the first people to demonstrate racism (she called it ‘discrimination’) in online algorithms. She did this with her research on the ad delivery practices of Google.
Continue reading “Racist Technology in Action: Racist search engine ads”

Racist Technology in Action: Apple’s emoji keyboard reinforces Western stereotypes
Time and time again, big tech companies have shown their ability and power to (mis)represent and (re)shape our digital world. From speech, to images, and most recently, to the emojis that we regularly use.
Continue reading “Racist Technology in Action: Apple’s emoji keyboard reinforces Western stereotypes”

Racist Technology in Action: Proctoring software disadvantaging students of colour in the Netherlands
In an opinion piece in Het Parool, the Racism and Technology Center wrote about how Dutch universities use proctoring software with facial recognition technology that systematically disadvantages students of colour (see the English translation of the opinion piece). Earlier, the Center wrote about the racial bias of these systems, which has led to Black students being excluded from exams or labelled as frauds because the software did not properly recognise their faces as faces. Despite the clear proof that Proctorio disadvantages students of colour, the University of Amsterdam still used Proctorio extensively during this June’s exam weeks.
Continue reading “Racist Technology in Action: Proctoring software disadvantaging students of colour in the Netherlands”

Racist Technology in Action: Predicting future criminals with a bias against Black people
In 2016, ProPublica investigated the fairness of COMPAS, a system used by courts in the United States to assess the likelihood of a defendant committing another crime. COMPAS uses a risk assessment form to estimate this risk of reoffending, and judges are expected to take the resulting prediction into account when they decide on sentencing.
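ProPublica’s central finding concerned error rates: among defendants who did not go on to reoffend, Black defendants were far more likely to have been labelled high risk, while White defendants who did reoffend were more likely to have been labelled low risk. That comparison can be sketched in Python against ProPublica’s published data (column names follow their dataset in spirit, not verbatim):

    # Sketch of ProPublica's error-rate comparison on COMPAS scores
    # (column names are close to, but not guaranteed to match, their dataset).
    import pandas as pd

    df = pd.read_csv("compas_scores.csv")
    df["high_risk"] = df["decile_score"] >= 5   # illustrative cut-off

    # False positive rate: labelled high risk but did not reoffend.
    no_reoffend = df[df["two_year_recid"] == 0]
    fpr = no_reoffend.groupby("race")["high_risk"].mean()

    # False negative rate: labelled low risk but did reoffend.
    reoffend = df[df["two_year_recid"] == 1]
    fnr = 1 - reoffend.groupby("race")["high_risk"].mean()

    print(pd.DataFrame({"false_positive_rate": fpr, "false_negative_rate": fnr}))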
Continue reading “Racist Technology in Action: Predicting future criminals with a bias against Black people”

Racist Technology in Action: Speech recognition systems by major tech companies are biased
From Siri to Alexa to Google Now, voice-based virtual assistants have become increasingly ubiquitous in our daily lives. So it is unsurprising that yet another AI technology – speech recognition systems – has been reported to be biased against Black people.
Continue reading “Racist Technology in Action: Speech recognition systems by major tech companies are biased”

Racist Technology in Action: Amazon’s racist facial ‘Rekognition’
An already infamous example of racist technology is Amazon’s facial recognition system ‘Rekognition’, which had an enormous racial and gender bias. Researcher and founder of the Algorithmic Justice League Joy Buolamwini (the ‘poet of code’), together with Deborah Raji, meticulously reconstructed how accurate Rekognition was at identifying different types of faces. Buolamwini and Raji’s study has been extremely consequential in laying bare the racism and sexism in these facial recognition systems and was featured in the popular Coded Bias documentary.
Continue reading “Racist Technology in Action: Amazon’s racist facial ‘Rekognition’”