In many discussions and policy proposals aimed at addressing the harms of AI and algorithmic decision-making, much hope has been placed on human oversight as the solution. In this article, Ben Green and Amba Kak urge us to question the limits of human oversight rather than seeing it as a magic bullet. Calls for ‘meaningful’ oversight, for example, sound better in theory than they work in practice. Humans are prone to automation bias, struggle to evaluate and act on an algorithm’s results, and can exhibit racial biases in how they respond to algorithms. Consequently, human oversight can itself produce racist outcomes, as has already been shown in areas such as policing and housing.
Continue reading “Human-in-the-loop is not the magic bullet to fix AI harms”
The False Comfort of Human Oversight as an Antidote to A.I. Harm
Humans are being tasked with overseeing algorithms that were put in place with the promise of compensating for human deficiencies.
By Amba Kak and Ben Green for Slate Magazine on June 15, 2021
Inside the fight to reclaim AI from Big Tech’s control
For years, Big Tech has set the global AI research agenda. Now, groups like Black in AI and Queer in AI are upending the field’s power dynamics to build AI that serves people.
By Karen Hao for MIT Technology Review on June 14, 2021
AI and its hidden costs
In a recent interview with The Guardian, Kate Crawford discusses her new book, Atlas of AI, which maps the broader landscape of how AI systems work by examining their structures of production and material realities. One example is ImageNet, a massive training dataset created by researchers from Stanford and used to train and test object recognition algorithms. It was built by scraping photos and images from across the web and hiring crowd workers to label them according to an outdated lexical database (WordNet) created in the 1980s.
Continue reading “AI and its hidden costs”
Racist Technology in Action: Predicting future criminals with a bias against Black people
In 2016, ProPublica investigated the fairness of COMPAS, a system used by courts in the United States to assess the likelihood of a defendant committing another crime. COMPAS bases this prediction on a risk assessment form filled out about the defendant, and judges are expected to take the prediction into account when they decide on sentencing.
Continue reading “Racist Technology in Action: Predicting future criminals with a bias against Black people”
The hidden work created by artificial intelligence programs
Successful and ethical artificial intelligence programs take into account behind-the-scenes ‘repair work’ and ‘ghost workers.’
By Sara Brown for MIT Sloan on May 4, 2021
Microsoft’s Kate Crawford: ‘AI is neither artificial nor intelligent’
The AI researcher on how natural resources and human labour drive machine learning and the regressive stereotypes that are baked into its algorithms.
By Kate Crawford for The Guardian on June 6, 2021
Demographic skews in training data create algorithmic errors
Women and people of colour are underrepresented and depicted with stereotypes.
From The Economist on June 5, 2021
Sentenced by Algorithm
Computer programs used to predict recidivism and determine prison terms have a high error rate, a secret design, and a demonstrable racial bias.
By Jed S. Rakoff for The New York Review of Books on June 10, 2021
Image classification algorithms at Apple, Google still push racist tropes
Automated systems from Apple and Google label characters with dark skin as “Animals”.
By Nicolas Kayser-Bril for AlgorithmWatch on May 14, 2021
EU’s new AI law risks enabling Orwellian surveillance states
“Far from a ‘human-centred’ approach, the draft law in its current form runs the risk of enabling Orwellian surveillance states,” writes @sarahchander from @edri.
By Sarah Chander for Euronews on April 22, 2021
Aiming for truth, fairness, and equity in your company’s use of AI
Advances in artificial intelligence (AI) technology promise to revolutionize our approach to medicine, finance, business operations, media, and more.
From Federal Trade Commission on April 19, 2021
Why EU needs to be wary that AI will increase racial profiling
This week the EU announces new regulations on artificial intelligence. It needs to set clear limits on the most harmful uses of AI, including predictive policing, biometric mass surveillance, and applications that exacerbate historic patterns of racist policing.
By Fieke Jansen and Sarah Chander for EUobserver on April 19, 2021
Twitter will share how race and politics shape its algorithms
The company is considering how its use of machine learning may reinforce existing biases.
By Anna Kramer for Protocol on April 14, 2021
Rotterdam’s use of algorithms could lead to ethnic profiling
The Rekenkamer Rotterdam (the city’s Court of Audit) examined how the city of Rotterdam uses predictive algorithms and whether that use could lead to ethical problems. In its report, it describes how the city lacks a proper overview of the algorithms it is using, and how, with no coordination, no one takes responsibility when things go wrong. It also found that while sensitive data (like nationality) were not used by one particular fraud detection algorithm, so-called proxy variables for ethnicity – like low literacy, which might correlate with ethnicity – were still part of the calculations. According to the Rekenkamer this could lead to unfair treatment, or as we would call it: ethnic profiling.
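To make the mechanism behind a proxy variable concrete, here is a minimal, purely illustrative sketch in Python. This is not the Rekenkamer’s analysis or Rotterdam’s actual model: every feature name, rate and number in it is invented. It shows how a fraud-risk model that never sees ethnicity, trained on historically skewed labels with “low literacy” as its only feature, can still assign systematically higher risk scores to a hidden minority group.

```python
# Toy illustration (hypothetical data, not Rotterdam's system) of how a proxy
# variable can reintroduce ethnicity into a model that excludes it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 50_000

# Hidden group membership; the model never sees this (0 = majority, 1 = minority).
group = rng.integers(0, 2, n)

# The proxy: "low literacy" is (hypothetically) more common in the minority group.
low_literacy = rng.random(n) < np.where(group == 1, 0.6, 0.2)

# True fraud is equally rare in both groups.
true_fraud = rng.random(n) < 0.05

# Historical labels: skewed scrutiny meant fraud was detected far more often
# among low-literacy residents, so the training labels correlate with the proxy.
detected = true_fraud & (rng.random(n) < np.where(low_literacy, 0.9, 0.3))

# Train a risk model on the proxy feature only; nationality/ethnicity is "not used".
X = low_literacy.reshape(-1, 1).astype(float)
risk = LogisticRegression().fit(X, detected).predict_proba(X)[:, 1]

for g in (0, 1):
    print(f"group {g}: true fraud rate {true_fraud[group == g].mean():.3f}, "
          f"mean predicted risk {risk[group == g].mean():.3f}")
```

The point is not the specific numbers but the structure: because the proxy encodes group membership, leaving the sensitive attribute out of the model does not stop the risk scores from splitting along ethnic lines.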
Continue reading “Rotterdam’s use of algorithms could lead to ethnic profiling”
Black creators sue YouTube, alleging racial discrimination
Algorithm systematically removes their content or limits how much it can earn from advertising, they allege.
By Reed Albergotti for Washington Post on June 18, 2020
This is the EU’s chance to stop racism in artificial intelligence
As the European Commission prepares its legislative proposal on artificial intelligence, human rights groups are watching closely for clear rules to limit discriminatory AI. In practice, this means a ban on biometric mass surveillance practices and red lines (legal limits) to stop harmful uses of AI-powered technologies.
By Sarah Chander for European Digital Rights (EDRi) on March 16, 2021
Europe’s artificial intelligence blindspot: Race
Upcoming rules on AI might make Europe’s race issues a tech problem too.
By Melissa Heikkilä for POLITICO on March 16, 2021
What Happens When Our Faces Are Tracked Everywhere We Go?
When a secretive start-up scraped the internet to build a facial-recognition tool, it tested a legal and ethical limit — and blew the future of privacy in America wide open.
By Kashmir Hill for The New York Times on March 18, 2021
Black voices bring much needed context to our data-driven society
By Klint Finley for GitHub on February 18, 2021
The Dutch government’s love affair with ethnic profiling
In his article for OneWorld, Florentijn van Rootselaar shows how the Dutch government uses automated systems to profile certain groups based on their ethnicity. He uses several examples to show how, even though Western countries are often quick to denounce China’s use of technology to surveil, profile and oppress the Uighurs, those same states use or contribute to the development of similar technologies themselves.
Continue reading “The Dutch government’s love affair with ethnic profiling”
Racist technology in action: Gun, or electronic device?

The answer to that question depends on your skin colour, apparently. AlgorithmWatch reporter Nicolas Kayser-Bril conducted an experiment, which went viral on Twitter, showing that Google Cloud Vision (a service based on the subset of AI known as “computer vision” that focuses on automated image labelling) labelled an image of a dark-skinned individual holding a thermometer with the word “gun”, while an image of a lighter-skinned individual holding the same thermometer was labelled “electronic device”.
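For readers who want to run a similar check themselves, below is a minimal sketch of how labels could be requested and compared using Google’s Python client. It assumes the google-cloud-vision package (version 2 or later), credentials configured through the GOOGLE_APPLICATION_CREDENTIALS environment variable, and placeholder file names rather than the original test images.

```python
# Hypothetical sketch of an audit like Kayser-Bril's: request Cloud Vision
# labels for two images that differ only in skin tone and compare the results.
# Requires `pip install google-cloud-vision` and a service account key.
from google.cloud import vision


def get_labels(path: str) -> list[str]:
    """Return the label descriptions Cloud Vision assigns to one image."""
    client = vision.ImageAnnotatorClient()
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)
    return [label.description for label in response.label_annotations]


if __name__ == "__main__":
    # Placeholder file names, not the images used in the original experiment.
    for path in ("hand_dark_skin_thermometer.jpg", "hand_light_skin_thermometer.jpg"):
        print(path, "->", get_labels(path))
```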
Continue reading “Racist technology in action: Gun, or electronic device?”
Opinie: ‘Oeigoeren hebben niets aan inclusief gepraat op Amsterdamse universiteiten’
What can universities learn from Antoine Griezmann, second striker at FC Barcelona? That human rights should be pursued in deed as well as in word, argue Joshua B. Cohen and Assamaual Saidi, and on that front there is still a world to be won.
By Assamaual Saidi and Joshua B. Cohen for Het Parool on February 11, 2021
How the Racism Baked Into Technology Hurts Teens
Adolescents spend ever greater portions of their days online and are especially vulnerable to discrimination. That’s a worrying combination.
By Avriel Epps-Darling for The Atlantic on October 24, 2020
Google fires AI researcher Timnit Gebru
Google has fired AI researcher and ethicist Timnit Gebru after she wrote an email criticising Google’s diversity policies while struggling with her leadership to get a critical paper on AI published. The firing angered thousands of her former colleagues and academics, who pointed to the unequal treatment Gebru received as a Black woman and worried about the integrity of Google’s research.
Continue reading “Google fires AI researcher Timnit Gebru”
Racist technology in action: Cropping out the non-white
A recent, yet already classic, example of racist technology is Twitter’s photo cropping machine learning algorithm. The algorithm was shown to consistently favour white faces in the cropped previews of pictures.
Continue reading “Racist technology in action: Cropping out the non-white”
Google apologizes after its Vision AI produced racist results
A Google service that automatically labels images produced starkly different results depending on skin tone on a given image. The company fixed the issue, but the problem is likely much broader.
By Nicolas Kayser-Bril for AlgorithmWatch on April 7, 2020
Some essential reading and research on race and technology
These resources are a starting point for the education that all responsible citizens should acquire about the intersection of race and technology.
From VentureBeat on June 2, 2020
Machine learning is a honeypot for phrenologists
When we say that “an algorithm is biased” we usually mean, “biased people made an algorithm.” This explains why so much machine learning prediction turns into phrenology.
By Cory Doctorow for Pluralistic on January 15, 2021
Timnit Gebru’s Exit From Google Exposes a Crisis in AI
The situation has made clear that the field needs to change. Here’s where to start, according to a current and a former Googler.
By Alex Hanna and Meredith Whittaker for WIRED on December 31, 2020
Hoe Nederland A.I. inzet voor etnisch profileren
China using artificial intelligence to oppress the Uighurs: does that sound far removed from your own life? The Netherlands, too, follows (and prosecutes) specific population groups with algorithms. As in Roermond, where cameras raise the alarm for cars with an Eastern European licence plate.
By Florentijn van Rootselaar for OneWorld on January 14, 2021
How our data encodes systematic racism
Technologists must take responsibility for the toxic ideologies that our data sets and algorithms reflect.
By Deborah Raji for MIT Technology Review on December 10, 2020
We read the paper that forced Timnit Gebru out of Google. Here’s what it says
The company’s star ethics researcher highlighted the risks of large language models, which are key to Google’s business.
By Karen Hao for MIT Technology Review on December 4, 2020
‘There’s a chilling effect’: Google’s firing of leading AI ethicist spurs industry outrage
Timnit Gebru’s firing could damage Google’s reputation and ethical AI research within tech companies, industry leaders told Protocol.
By Anna Kramer for Protocol on December 3, 2020
Discriminating Systems: Gender, Race, and Power in AI
The diversity crisis in AI is well-documented and wide-reaching. It can be seen in unequal workplaces throughout industry and in academia, in the disparities in hiring and promotion, in the AI technologies that reflect and amplify biased stereotypes, and in the resurfacing of biological determinism in automated systems.
By Kate Crawford, Meredith Whittaker and Sarah Myers West for AI Now Institute on April 1, 2019
Designed to Deceive: Do These People Look Real to You?
The people in this story may look familiar, like ones you’ve seen on Facebook or Twitter or Tinder. But they don’t exist. They were born from the mind of a computer, and the technology behind them is improving at a startling pace.
By Kashmir Hill for The New York Times on November 21, 2020
Dataminr Targets Communities of Color for Police
Insiders say Dataminr’s “algorithmic” Twitter search involves human staffers perpetuating confirmation biases.
By Sam Biddle for The Intercept on October 21, 2020
Asymmetrical Power: The intransparency of the Dutch Police
In this interview with Jair Schalkwijk and Naomi Appelman, we try to bring some transparency to the use of facial recognition technologies in law enforcement.
By Margarita Osipian for The Hmm on October 8, 2020
Ja, gezichtsherkenningstechnologie discrimineert – maar een verbod is niet de oplossing
Just as the death of George Floyd led to protests around the world, the biased image-processing technology PULSE did the same in the scientific world. There were calls for a ban, but neuroinformatics researcher Sennay Ghebreab wonders whether a digital iconoclasm will solve the problem.
By Sennay Ghebreab for Vrij Nederland on October 5, 2020
NIST benchmarks show facial recognition technology still struggles to identify Black faces
NIST benchmarks suggest some facial recognition algorithms haven’t corrected historic bias — and are actually getting worse.
By Kyle Wiggers for VentureBeat on September 9, 2020