New York City uses a secret Child Welfare Algorithm
New York City’s Administration for Children’s Services (ACS) has been secretly using an AI risk assessment system since 2018 to flag families for additional investigation. The Markup’s investigation reveals that the algorithm disproportionately affects families of colour, raising serious questions about algorithmic bias against racialised and poor families in child welfare.
Racist Technology in Action: How the municipality of Amsterdam tried to roll out a ‘fair’ fraud detection algorithm. Spoiler alert: it was a disaster
Amsterdam officials’ technosolutionist thinking struck once again: they believed their “Smart Check” AI system could prevent welfare fraud while protecting citizens’ rights.
What does it mean for an algorithm to be “fair”?
Amsterdam’s struggles with its welfare fraud algorithm show us the stakes of deploying AI in situations that directly affect human lives.
By Eileen Guo and Hans de Zwart for MIT Technology Review on June 17, 2025
The NYC Algorithm Deciding Which Families Are Under Watch for Child Abuse
How a family’s neighborhood, age, and mental health might get their case a deeper look.
By Colin Lecher for The Markup on May 20, 2025
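The Markup’s story describes a screening model that weighs family characteristics to decide which cases get a deeper look. As a purely hypothetical sketch of how feature-weighted risk scoring works (the feature names, weights, and threshold below are invented for illustration and are not ACS’s actual model):

```python
# Hypothetical sketch of feature-weighted risk scoring; all names and
# numbers here are invented, not taken from ACS's system.
WEIGHTS = {
    "neighborhood_poverty_rate": 2.0,  # proxy features like these can
    "caregiver_age_under_25": 1.5,     # encode race and class, so the
    "mental_health_history": 1.8,      # score reproduces existing bias
    "prior_reports": 2.5,
}

def risk_score(case: dict) -> float:
    """Sum the weighted features; cases above a threshold get a 'deeper look'."""
    return sum(weight * float(case.get(name, 0)) for name, weight in WEIGHTS.items())

case = {"neighborhood_poverty_rate": 0.4, "caregiver_age_under_25": 1,
        "mental_health_history": 1, "prior_reports": 2}
print(risk_score(case), risk_score(case) > 5.0)  # the threshold is itself a policy choice
```

Note that none of these features mention race, yet neighbourhood poverty and prior reports correlate strongly with it, which is how a nominally race-blind score can end up flagging families of colour more often.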
How we investigated Amsterdam’s attempt to build a ‘fair’ fraud detection model
Amsterdam spent years trying to build an unbiased welfare fraud algorithm. Here’s what we found when we analyzed it.
By Amanda Silverman, Eileen Guo, Eva Constantaras, Gabriel Geiger, and Justin-Casimir Braun for Lighthouse Reports on June 11, 2025
Revealed: Israeli military creating ChatGPT-like tool using vast collection of Palestinian surveillance data
The powerful new AI model is designed to analyze intercepted communications – but experts say such systems can exacerbate biases and are prone to making mistakes.
By Harry Davies and Yuval Abraham for The Guardian on March 6, 2025
Amsterdam wanted to use AI to make welfare fairer and more efficient. It turned out differently
The government has gone astray before with algorithms meant to combat benefits fraud. The municipality of Amsterdam wanted to do it all differently, but discovered: an ethical algorithm is an illusion.
By Hans de Zwart and Jeroen van Raalte for Trouw on June 6, 2025
Navigating in no man’s land
The Wetenschappelijke Adviesraad Politie (WARP, the Dutch police’s scientific advisory council) advises the chief of police on seven urgent challenges around digitalisation and AI in police work.
From Wetenschappelijke Adviesraad Politie on June 5, 2025
For One Hilarious, Terrifying Day, Elon Musk’s Chatbot Lost Its Mind
Just don’t ask it about “white genocide.”
By Zeynep Tufekci for The New York Times on May 17, 2025
Racist Technology in Action: Grok AI is obsessively focused on the extreme right trope of “white genocide” in South Africa
For a day or so, Musk’s Grok AI chatbot inserted its belief in a “white genocide” in South Africa, by now a classic white supremacist fabrication, into nearly every answer it gave, regardless of the question asked.
Google ‘playing with fire’ by acquiring Israeli company founded by Unit 8200 veterans
Google’s $32bn purchase of Israeli cloud security company Wiz is its most expensive ever acquisition.
By Areeb Ullah for Middle East Eye on March 20, 2025
Grok’s ‘white genocide’ glitch and the AI black box
Users asking Elon Musk’s Grok chatbot on X for information about baseball, HBO Max or even a cat playing in a sink received some … curious responses Wednesday.
By Derek Robertson for POLITICO on May 15, 2025
The Day Grok Told Everyone About ‘White Genocide’
What in the world just happened with Elon Musk’s chatbot?
By Ali Breland and Matteo Wong for The Atlantic on May 15, 2025
‘Ethical’ AI in healthcare has a racism problem, and it needs to be fixed ASAP
We all know that racist algorithms can harm people across many sectors, and healthcare is no exception. In a powerful commentary published by Cell Press, Ferryman et al. argue that racism must be treated as a core ethical issue in healthcare AI, not merely a flaw to be patched after deployment.
US to revoke student visas over ‘pro-Hamas’ social media posts flagged by AI – report
State department launches AI-assisted reviews of accounts to look for what it perceives as Hamas supporters.
From The Guardian on March 7, 2025
Racist Technology in Action: An LA Times AI tool defending the KKK
The LA Times is adding AI-written alternative points of view to opinion columns, editorials, and commentary. To the surprise of no one, the AI promptly downplayed the Ku Klux Klan’s nature as a hate-driven movement.
So the LA Times replaced me with an AI that defends the KKK
The LA Times is laying off and buying out staff, while introducing highly dubious AI tools. This is what automation looks like in 2025.
By Brian Merchant for Blood in the Machine on March 8, 2025
iPhone Dictation Feature Transcribes the Word ‘Racist’ as ‘Trump’
The company said it was working to fix the problem after iPhone users began reporting the issue.
By Eli Tan and Tripp Mickle for The New York Times on February 25, 2025
Can Humanity Survive AI?
With the development of artificial intelligence racing forward at warp speed, some of the richest men in the world may be deciding the fate of humanity right now.
By Garrison Lovely for Jacobin on January 22, 2025
Racist Technology in Action: Grok’s total lack of safeguards against generating racist content
Grok is the chatbot made by xAI, the startup founded by Elon Musk, and the generative AI system powering X (née Twitter). It recently gained the ability to generate photorealistic images, including of celebrities. This is a problem, as its ‘guardrails’ are lacking: it willingly generates racist and other deeply problematic images.
‘Just the start’: X’s new AI software driving online racist abuse, experts warn
Amid reports of creation of fake racist images, Signify warns problem will get ‘so much worse’ over the next year.
By Raphael Boyd for The Guardian on January 13, 2025
Meta terminates its DEI programs days before Trump inauguration
Meta, fresh off announcement to end factchecking, follows McDonald’s and Walmart in rolling back diversity initiatives.
By Adria R Walker for The Guardian on January 10, 2025
She didn’t get an apartment because of an AI-generated score – and sued to help others avoid the same fate
Despite a stellar reference from a landlord of 17 years, Mary Louis was rejected after being screened by firm SafeRent
By Johana Bhuiyan for The Guardian on December 14, 2024
Revealed: bias found in AI system used to detect UK benefits fraud
Exclusive: Age, disability, marital status and nationality influence decisions to investigate claims, prompting fears of ‘hurt first, fix later’ approach.
By Robert Booth for The Guardian on December 6, 2024
Why ‘open’ AI systems are actually closed, and why this matters
This paper examines ‘open’ artificial intelligence (AI). Claims about ‘open’ AI often lack precision, frequently eliding scrutiny of substantial industry concentration in large-scale AI development and deployment, and often incorrectly applying understandings of ‘open’ imported from free and open-source software to AI systems. At present, powerful actors are seeking to shape policy using claims that ‘open’ AI is either beneficial to innovation and democracy, on the one hand, or detrimental to safety, on the other. When policy is being shaped, definitions matter. To add clarity to this debate, we examine the basis for claims of openness in AI, and offer a material analysis of what AI is and what ‘openness’ in AI can and cannot provide: examining models, data, labour, frameworks, and computational power. We highlight three main affordances of ‘open’ AI, namely transparency, reusability, and extensibility, and we observe that maximally ‘open’ AI allows some forms of oversight and experimentation on top of existing models. However, we find that openness alone does not perturb the concentration of power in AI. Just as many traditional open-source software projects were co-opted in various ways by large technology companies, we show how rhetoric around ‘open’ AI is frequently wielded in ways that exacerbate rather than reduce concentration of power in the AI sector.
By David Gray Widder, Meredith Whittaker, and Sarah Myers West for Nature on November 27, 2024
How this grassroots effort could make AI voices more diverse
A massive volunteer-led effort to collect training data in more languages, from people of more ages and genders, could help make the next generation of voice AI more inclusive and less exploitative.
By Melissa Heikkilä for MIT Technology Review on November 15, 2024
Sweden’s Suspicion Machine
Behind a veil of secrecy, the social security agency deploys discriminatory algorithms in search of a fraud epidemic it has invented.
By Ahmed Abdigadir, Anna Tiberg, Daniel Howden, Eva Constantaras, Frederick Laurin, Gabriel Geiger, Henrik Malmsten, Iben Ljungmark, Justin-Casimir Braun, Sascha Granberg, and Thomas Molén for Lighthouse Reports on November 27, 2024
Ruha Benjamin on Eugenics 2.0
UC Berkeley recently discovered a fund established in 1975 to support research into eugenics. Our (avowed) perspective on this ideology has since changed, so the university repurposed the fund and commissioned a series on the legacies of eugenics for the LA Review of Books.
‘I received a first but it felt tainted and undeserved’: inside the university AI cheating crisis
More than half of students are now using generative AI, casting a shadow over campuses as tutors and students turn on each other and hardworking learners are caught in the flak. Will Coldwell reports on a broken system.
By Will Coldwell for The Guardian on December 15, 2024
The New Artificial Intelligentsia
In the fifth essay of the Legacies of Eugenics series, Ruha Benjamin explores how AI evangelists wrap their self-interest in a cloak of humanistic concern.
By Ruha Benjamin for Los Angeles Review of Books on October 18, 2024
AI is often sexist and discriminatory: ‘Never more neutral than humans’
Only photos of men when you search for ‘CEO’, or facial recognition that doesn’t work for people of colour: artificial intelligence is often sexist and discriminatory. That problem doesn’t originate in AI itself, but in physical society.
By Lisa O’Malley and Siri Beerends for Linda on September 24, 2024
Beyond Surveillance: The Case Against AI Proctoring & AI Detection
On September 18, 2024, as part of the BCcampus EdTech Sandbox Series, I presented my case against AI proctoring and AI detection. In this post you will learn about key points from my presentation and our discussion.
By Ian Linkletter for BCcampus on October 16, 2024
Falsely Flagged: The AI-Driven Discrimination Black Students Face
Common Sense, an education platform that advocates for and advises on equitable and safe school environments, published a report last month on the adoption of generative AI at home and at school. Parents, teachers, and children were surveyed to better understand the adoption and effects of the technology.
Beyond Surveillance – The Case Against AI Detection and AI Proctoring
Are you an educator seeking a supportive space to critically examine AI surveillance tools? This workshop is for you. In an era where AI increasingly pervades education, AI detection and proctoring have sparked significant controversy. These tools, categorized as academic surveillance software, algorithmically monitor behaviour and movements. Students are increasingly forced to face them. Together, we will move beyond surveillance toward a culture of trust and transparency, shining a light on the black box of surveillance and discussing our findings. In this two-hour workshop, we will explore AI detection and proctoring through a 40-minute presentation, an hour of activities and discussion, and 20 minutes of group tool evaluation using a rubric.
By Ian Linkletter for BCcampus on September 18, 2024
Series: AI Colonialism
An investigation into how AI is enriching a powerful few by dispossessing communities that have been dispossessed before.
From MIT Technology Review on April 19, 2022
Black Teens’ Schoolwork Twice As Likely To Be Falsely Flagged As AI-Generated
Black students are over twice as likely to be falsely accused of using AI tools to complete school assignments compared to their peers.
By Sara Keenan for POCIT on September 19, 2024
The pace at which indigenous languages are currently disappearing is worryingly high
Half of all languages are currently threatened with extinction. The Sateré-Mawé in Brazil want to prevent this by digitising their language. But can this be done without Big Tech? And who does the language actually belong to?
By Sanne Bloemink for De Groene Amsterdammer on August 21, 2024
Why the AI revolution is leaving Africa behind
Large infrastructure gaps are creating a new digital divide.
From The Economist on July 25, 2024
AI was supposed to spare civilian lives in wartime. In reality, more people are dying
Artificial intelligence was supposed to ensure fewer civilian deaths during wars. In reality, there are more. Because where people are reduced to data points, opening fire quickly feels objective and correct.
By Lauren Gould, Linde Arentze, and Marijn Hoijtink for De Groene Amsterdammer on July 24, 2024
Why Stopping Algorithmic Inequality Requires Taking Race Into Account
Let us explain. With cats.
By Aaron Sankin and Natasha Uzcátegui-Liggett for The Markup on July 18, 2024
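A minimal sketch of the piece’s core point, using invented audit data (none of it from the article): disparate error rates can only be measured, and therefore corrected, if the protected attribute is recorded in the first place.

```python
# Minimal sketch: you cannot detect disparate error rates without
# recording the group attribute. All data below is invented.
from collections import defaultdict

# (group, model_flagged, actually_fraud) triples -- hypothetical audit data
records = [
    ("A", True, False), ("A", False, False), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", True, False), ("B", True, True), ("B", False, False),
]

fp = defaultdict(int)   # false positives per group
neg = defaultdict(int)  # actual negatives per group

for group, flagged, fraud in records:
    if not fraud:
        neg[group] += 1
        if flagged:
            fp[group] += 1

for group in sorted(neg):
    print(group, f"false positive rate = {fp[group] / neg[group]:.2f}")
# Without the group column, these two rates collapse into one average
# and the disparity becomes invisible.
```

Drop the group column and the two rates merge into a single average, hiding the disparity, which is why a ‘race-blind’ pipeline cannot detect the inequality it produces.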