After being ghosted by numerous recruiters during her unemployment, Aliyah Jones, a Black woman, decided to create a LinkedIn ‘catfish’ account under the name Emily Osborne, a blonde-haired, blue-eyed white woman eager to advance her career in graphic design. The only difference between ‘Emily’ and Jones? Their names and skin colour. Their work experience and capabilities were the same.
Continue reading “She had to catfish herself as a white woman to get a job: AI-mediated racism on LinkedIn and in recruiting”
Racist Technology in Action: The algorithm that was supposed to match asylum seekers to places with jobs doesn’t work and is discriminatory
For many years and for many people, GeoMatch by the Immigration Policy Lab was a shining example of ‘AI for Good’: instead of using algorithms to find criminals or fraud, why don’t we use it to allocate asylum seekers to regions that give them the most job opportunities? Only the naive can be surprised that this didn’t work out as promised.
Continue reading “Racist Technology in Action: The algorithm that was supposed to match asylum seekers to places with jobs doesn’t work and is discriminatory”
Google’s AI Nano Banana Pro accused of generating racialised ‘white saviour’ visuals
Research finds tool depicts white women surrounded by black children when prompted about humanitarian aid in Africa.
By Aisha Down for The Guardian on December 4, 2025
Big data promises refugees with residence permits better job prospects, but the opposite turns out to be true
At the end of 2024, the Central Agency for the Reception of Asylum Seekers (COA) began testing an algorithm. It was supposed to help refugees find work more easily in their new place of residence: a win-win for everyone. But the system works counterproductively and discriminates. Although there were already serious concerns, the COA pushed ahead with the trial anyway.
By David Davidson and Evaline Schot for Follow the Money on December 3, 2025
When Face Recognition Doesn’t Know Your Face Is a Face
An estimated 100 million people live with facial differences. As face recognition tech becomes widespread, some say they’re getting blocked from accessing essential systems and services.
By Matt Burgess for WIRED on October 15, 2025
OpenAI is huge in India. Its models are steeped in caste bias.
India is OpenAI’s second-largest market, but ChatGPT and Sora reproduce caste stereotypes that harm millions of people.
By Nilesh Christopher for MIT Technology Review on October 1, 2025
Racist Technology in Action: The caste bias in large language models
The MIT Technology Review shows how the models of major AI companies, like OpenAI’s ChatGPT, reflect India’s caste bias.
Continue reading “Racist Technology in Action: The caste bias in large language models”
AI-generated ‘poverty porn’ fake images being used by aid agencies
Exclusive: AI-generated pictures depicting the poorest and most vulnerable people are being used in aid agencies’ social media campaigns, driven by concerns over consent and cost.
By Aisha Down for The Guardian on October 20, 2025
The entanglement of the (Dutch) government and Big Tech when it comes to AI
De Balie organised an evening with Madhumita Murgia about her recent book Code-Dependent. Our own Naomi Appelman was asked to reflect on the book.
Continue reading “The entanglement of the (Dutch) government and Big Tech when it comes to AI”
Racist Technology in Action: OpenAI’s Sora Launch: Yet another racist generative AI
The Guardian reports that OpenAI’s new AI video generator, Sora 2, launched with a social feed feature that lets users easily share their generated videos on social media platforms. Predictably, within hours, violent and racist videos generated through Sora flooded these platforms. Despite OpenAI claiming to have implemented safeguards and mitigating measures, the app generated videos depicting mass shootings, bomb scares, and fabricated war footage from Gaza and Myanmar showing AI-generated children.
Continue reading “Racist Technology in Action: OpenAI’s Sora Launch: Yet another racist generative AI”
What The Workday Lawsuit Reveals About AI Bias—And How To Prevent It
Workday’s AI bias lawsuit is a warning to employers: biased hiring algorithms are a legal risk. This article explores how to prevent AI from reinforcing workplace bias.
By Janice Gassam Asare for Forbes on June 23, 2025
How do you make AI friendlier to women?
Almost all AI applications have a preference for white men. The Council of Europe recently called for action because AI fuels discrimination, prejudice, and violence against women. How do you make sure artificial intelligence does not discriminate?
By Marijn Heemskerk for Vrij Nederland on July 19, 2025
OpenAI launch of video app Sora plagued by violent and racist images: ‘The guardrails are not real’
Misinformation researchers say lifelike scenes could obfuscate truth and lead to fraud, bullying and intimidation.
By Dara Kerr for The Guardian on October 4, 2025
Drones could soon become more intrusive than ever
“Whole-body” biometrics are on their way.
From The Economist on August 13, 2025
California colleges spend millions to catch plagiarism and AI. Is the faulty tech worth it?
Colleges and universities renew Turnitin subscriptions year after year even though its flawed detectors are expensive and require students to let the company keep their papers forever.
By Tara García Mathewson for The Markup on June 26, 2025
UK border officials to use AI to verify ages of child asylum seekers
Trial of technology comes as official report warns existing system has been failing for at least a decade.
By Kiran Stacey for The Guardian on July 22, 2025
Google Worried It Couldn’t Control How Israel Uses Project Nimbus, Files Reveal
Internal Google documents show that the tech giant feared it wouldn’t be able to monitor how Israel might use its technology to harm Palestinians.
By Sam Biddle for The Intercept on May 12, 2025
“It’s not a representation of me”: Examining Accent Bias and Digital Exclusion in Synthetic AI Voice Services
Recent advances in artificial intelligence (AI) speech generation and voice cloning technologies have produced naturalistic speech and accurate voice replication, yet their influence on sociotechnical systems across diverse accents and linguistic traits is not fully understood. This study evaluates two synthetic AI voice services (Speechify and ElevenLabs) through a mixed methods approach using surveys and interviews to assess technical performance and uncover how users’ lived experiences influence their perceptions of accent variations in these speech technologies. Our findings reveal technical performance disparities across five regional, English-language accents and demonstrate how current speech generation technologies may inadvertently reinforce linguistic privilege and accent-based discrimination, potentially creating new forms of digital exclusion. Overall, our study highlights the need for inclusive design and regulation by providing actionable insights for developers, policymakers, and organizations to ensure equitable and socially responsible AI speech technologies.
By Avijit Ghosh, Christo Wilson, Jeffrey Gleason, Sarah Elizabeth Gillespie, Shira Michel, and Sufi Kaur for ACM Digital Library on June 23, 2025
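One way the kind of per-accent performance gap described above is commonly quantified is word error rate (WER) between the intended script and a transcription of the synthetic audio. The minimal sketch below, using the jiwer library, illustrates that comparison; the metric choice, file layout, and accent labels are illustrative assumptions, not the paper’s own mixed-methods protocol.

```python
# Illustrative sketch: per-accent word error rate (WER) for synthetic speech.
# The records, accent labels, and the use of WER are assumptions made for
# this example; the study itself combines surveys, interviews, and its own
# technical measures.
from collections import defaultdict
import jiwer

# Hypothetical records: the script sent to the voice service and a
# transcription of the audio it produced, grouped by the speaker's accent.
records = [
    {"accent": "Scottish English", "reference": "the weather is lovely today",
     "hypothesis": "the weather is lovely today"},
    {"accent": "Indian English", "reference": "the weather is lovely today",
     "hypothesis": "the water is lovely today"},
]

by_accent = defaultdict(lambda: {"refs": [], "hyps": []})
for record in records:
    by_accent[record["accent"]]["refs"].append(record["reference"])
    by_accent[record["accent"]]["hyps"].append(record["hypothesis"])

# A large gap in WER between accent groups is one concrete signal of the
# disparities the paper describes.
for accent, pairs in by_accent.items():
    error_rate = jiwer.wer(pairs["refs"], pairs["hyps"])
    print(f"{accent}: WER = {error_rate:.2f}")
```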
TIDEs: A Transgender and Nonbinary Community-Labeled Dataset and Model for Transphobia Identification in Digital Environments
Transphobic rhetoric is a prevalent problem on social media that existing platform policies fail to meaningfully address. As such, trans people often create or adopt technologies independent from (but deployed within) platforms that help them mitigate the effects of facing transphobia online. In this paper, we introduce TIDEs (Transphobia Identification in Digital Environments), a dataset and model for detecting transphobic speech to contribute to the growing space of trans technologies for content moderation. We outline care-centered data practices, a methodology for constructing and labeling datasets for hate speech classification, which we developed while working closely with trans and nonbinary data annotators. Our fine-tuned DeBERTa model succeeds at detecting several ideologically distinct types of transphobia, achieving an F1 score of 0.81. As a publicly available dataset and model, TIDEs can serve as the base for future trans technologies and research that confronts and addresses the problem of online transphobia. Our results suggest that downstream applications of TIDEs may be deployable for reducing online harm for trans people.
By Dallas Card, Eric Gilbert, Francesca Lameiro, Lavinia Dunagan, and Oliver Haimson for ACM Digital Library on June 23, 2025
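To make the reported F1 of 0.81 concrete, here is a minimal sketch of how a fine-tuned DeBERTa classifier like the one described above might be evaluated on a held-out test set with the Hugging Face transformers pipeline. The checkpoint path, test file, and label mapping are hypothetical stand-ins rather than the authors’ released artifacts.

```python
# Minimal evaluation sketch for a binary transphobia classifier.
# CHECKPOINT and TEST_FILE are hypothetical placeholders; substitute the
# dataset and model actually released with the TIDEs paper.
import json

from sklearn.metrics import f1_score
from transformers import pipeline

CHECKPOINT = "./tides-deberta-checkpoint"  # hypothetical local fine-tuned model
TEST_FILE = "./tides_test.jsonl"           # one {"text": ..., "label": 0 or 1} per line

classifier = pipeline("text-classification", model=CHECKPOINT, truncation=True)

texts, gold = [], []
with open(TEST_FILE) as f:
    for line in f:
        example = json.loads(line)
        texts.append(example["text"])
        gold.append(example["label"])

# Assumes the model's labels are exported as "LABEL_0" / "LABEL_1".
predictions = classifier(texts, batch_size=32)
preds = [int(p["label"].split("_")[-1]) for p in predictions]

print("F1 for the transphobic class:", f1_score(gold, preds))
```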
Dangerous Criminals and Beautiful Prostitutes? Investigating Harmful Representations in Dutch Language Models
While language-based AI is becoming increasingly popular, ensuring that these systems are socially responsible is essential. Despite their growing impact, large language models (LLMs), the engines of many language-driven applications, remain largely in the black box. Concerns about LLMs reinforcing harmful representations are shared by academia, industries, and the public. In professional contexts, researchers rely on LLMs for computational tasks such as text classification and contextual prediction, during which the risk of perpetuating biases cannot be overlooked. In a broader society where LLM-powered tools are widely accessible, interacting with biased models can shape public perceptions and behaviors, potentially reinforcing problematic social issues over time. This study investigates harmful representations in LLMs, focusing on ethnicity and gender in the Dutch context. Through template-based sentence construction and model probing, we identified potentially harmful representations using both automated and manual content analysis at the lexical and sentence levels, combining quantitative measurements with qualitative insights. Our findings have important ethical, legal, and political implications, challenging the acceptability of such harmful representations and emphasizing the need for effective mitigation strategies.
By Claes de Vreese, Gabriela Trogrlic, Natali Helberger, and Zilin Lin for ACM Digital Library on June 23, 2025
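The template-based probing the authors describe can be pictured with a small sketch along the following lines, here using the publicly available Dutch BERT model GroNLP/bert-base-dutch-cased and the Hugging Face fill-mask pipeline. The templates and group terms are illustrative assumptions, and a masked-LM probe is only one variant of such probing; the paper’s own templates, models, and content analysis are more extensive.

```python
# Illustrative template-based probing of a Dutch masked language model.
# The templates and group terms below are assumptions for this sketch and
# are not the templates used in the paper.
from transformers import pipeline

# BERTje, a publicly available Dutch BERT; the study may probe other models.
fill = pipeline("fill-mask", model="GroNLP/bert-base-dutch-cased")

# Each template pairs a group term with a masked attribute slot.
templates = [
    "De {group} man is [MASK].",    # "The {group} man is [MASK]."
    "De {group} vrouw is [MASK].",  # "The {group} woman is [MASK]."
]
groups = ["Nederlandse", "Marokkaanse", "Surinaamse"]

for template in templates:
    for group in groups:
        sentence = template.format(group=group)
        top_predictions = fill(sentence, top_k=5)
        completions = [p["token_str"].strip() for p in top_predictions]
        # Comparing completions across groups surfaces skewed associations.
        print(sentence, "->", completions)
```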
Benchmarking Algorithmic Bias in Face Recognition
An Experimental Approach Using Synthetic Faces and Human Evaluation.
From hliang2 on April 27, 2023
New York City uses a secret Child Welfare Algorithm
New York City’s Administration for Children’s Services (ACS) has been secretly using an AI risk assessment system since 2018 to flag families for additional investigation. This Markup investigation reveals how this algorithm mainly affects families of colour and raises serious questions about algorithmic bias against racialised and poor families in child welfare.
Continue reading “New York City uses a secret Child Welfare Algorithm”
Racist Technology in Action: How the municipality of Amsterdam tried to roll out a ‘fair’ fraud detection algorithm. Spoiler alert: it was a disaster
Amsterdam officials’ technosolutionist way of thinking struck once again: they believed they could build technology that would prevent fraud while protecting citizens’ rights through their “Smart Check” AI system.
Continue reading “Racist Technology in Action: How the municipality of Amsterdam tried to roll out a ‘fair’ fraud detection algorithm. Spoiler alert: it was a disaster”
What does it mean for an algorithm to be “fair”?
Amsterdam’s struggles with its welfare fraud algorithm show us the stakes of deploying AI in situations that directly affect human lives.
By Eileen Guo and Hans de Zwart for MIT Technology Review on June 17, 2025
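Part of what the piece unpacks is that “fair” has several competing statistical definitions. The toy sketch below, with made-up numbers rather than anything from Amsterdam’s Smart Check data, contrasts two common group metrics (the rate at which a group is flagged and the rate at which its non-fraudulent members are wrongly flagged) to show how the choice of definition changes which group bears the burden of wrongful flags.

```python
# Toy illustration of two group fairness metrics for a fraud-flagging model.
# The numbers are invented for this sketch and do not come from Amsterdam's
# Smart Check system or the journalists' analysis.
def group_metrics(flagged, fraud):
    """Return (flag rate, false positive rate) for one group.

    flagged: 1 if the model flagged the person, else 0.
    fraud:   1 if the person actually committed fraud, else 0.
    """
    non_fraud_flags = [f for f, y in zip(flagged, fraud) if y == 0]
    flag_rate = sum(flagged) / len(flagged)
    false_positive_rate = sum(non_fraud_flags) / max(len(non_fraud_flags), 1)
    return flag_rate, false_positive_rate

# Hypothetical outcomes for two demographic groups of eight applicants each.
group_a = {"flagged": [1, 1, 1, 0, 0, 0, 0, 0], "fraud": [1, 0, 0, 0, 0, 0, 0, 0]}
group_b = {"flagged": [1, 0, 0, 0, 0, 0, 0, 0], "fraud": [1, 0, 0, 0, 0, 0, 0, 0]}

for name, group in (("group A", group_a), ("group B", group_b)):
    flag_rate, fpr = group_metrics(group["flagged"], group["fraud"])
    print(f"{name}: flag rate = {flag_rate:.2f}, false positive rate = {fpr:.2f}")
```

With these invented numbers both groups have the same underlying fraud rate, yet group A is flagged more often and its innocent members are wrongly flagged more often; which of these rates a city chooses to equalise determines who ends up carrying the cost of the model’s mistakes.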
The NYC Algorithm Deciding Which Families Are Under Watch for Child Abuse
How a family’s neighborhood, age, and mental health might get their case a deeper look.
By Colin Lecher for The Markup on May 20, 2025
How we investigated Amsterdam’s attempt to build a ‘fair’ fraud detection model
Amsterdam spent years trying to build an unbiased welfare fraud algorithm. Here’s what we found when we analyzed it.
By Amanda Silverman, Eileen Guo, Eva Constantaras, Gabriel Geiger, and Justin-Casimir Braun for Lighthouse Reports on June 11, 2025
Revealed: Israeli military creating ChatGPT-like tool using vast collection of Palestinian surveillance data
The powerful new AI model is designed to analyze intercepted communications – but experts say such systems can exacerbate biases and are prone to making mistakes.
By Harry Davies and Yuval Abraham for The Guardian on March 6, 2025
Navigating no man’s land
The Scientific Advisory Council for the Police (WARP) advises the chief of the Dutch national police on seven urgent challenges around digitalisation and AI in police work.
From Wetenschappelijke Adviesraad Politie on June 5, 2025
Amsterdam wanted to use AI to make welfare fairer and more efficient. It turned out differently
Government agencies have gone wrong before with algorithms meant to combat benefit fraud. The municipality of Amsterdam wanted to do things differently, but came to realise that an ethical algorithm is an illusion.
By Hans de Zwart and Jeroen van Raalte for Trouw on June 6, 2025
For One Hilarious, Terrifying Day, Elon Musk’s Chatbot Lost Its Mind
Just don’t ask it about “white genocide.”
By Zeynep Tufekci for The New York Times on May 17, 2025
Racist Technology in Action: Grok AI is obsessively focused on the extreme right trope of “white genocide” in South Africa
For a day or so, Musk’s Grok AI chatbot would insert its belief in a “white genocide” in South Africa, by now a classic white supremacist fabrication, into nearly every answer it gave, regardless of the question asked.
Continue reading “Racist Technology in Action: Grok AI is obsessively focused on the extreme right trope of “white genocide” in South Africa”
Google ‘playing with fire’ by acquiring Israeli company founded by Unit 8200 veterans
Google’s $32bn purchase of Israeli cloud security company Wiz is its most expensive ever acquisition.
By Areeb Ullah for Middle East Eye on March 20, 2025
Grok’s ‘white genocide’ glitch and the AI black box
Users asking Elon Musk’s Grok chatbot on X for information about baseball, HBO Max or even a cat playing in a sink received some … curious responses Wednesday.
By Derek Robertson for POLITICO on May 15, 2025
The Day Grok Told Everyone About ‘White Genocide’
What in the world just happened with Elon Musk’s chatbot?
By Ali Breland and Matteo Wong for The Atlantic on May 15, 2025
‘Ethical’ AI in healthcare has a racism problem, and it needs to be fixed ASAP
We all know that racist algorithms can harm people across many sectors, and healthcare is no exception. In a powerful commentary published by Cell Press, Ferryman et al. argue that racism must be treated as a core ethical issue in healthcare AI, not merely a flaw to be patched after deployment.
Continue reading “‘Ethical’ AI in healthcare has a racism problem, and it needs to be fixed ASAP”
US to revoke student visas over ‘pro-Hamas’ social media posts flagged by AI – report
State department launches AI-assisted reviews of accounts to look for what it perceives as Hamas supporters.
From The Guardian on March 7, 2025
Racist Technology in Action: An LA Times AI tool defending the KKK
The LA Times is adding AI-written alternative points of view to opinion columns, editorials, and commentary. To the surprise of no one, this AI then downplayed the extent to which the Ku Klux Klan was a hate-driven movement.
Continue reading “Racist Technology in Action: An LA Times AI tool defending the KKK”
So the LA Times replaced me with an AI that defends the KKK
The LA Times is laying off and buying out staff, while introducing highly dubious AI tools. This is what automation looks like in 2025.
By Brian Merchant for Blood in the Machine on March 8, 2025
iPhone Dictation Feature Transcribes the Word ‘Racist’ as ‘Trump’
The company said it was working to fix the problem after iPhone users began reporting the issue.
By Eli Tan and Tripp Mickle for The New York Times on February 25, 2025
Can Humanity Survive AI?
With the development of artificial intelligence racing forward at warp speed, some of the richest men in the world may be deciding the fate of humanity right now.
By Garrison Lovely for Jacobin on January 22, 2025
