A recent, yet already classic, example of racist technology is Twitter’s photo-cropping machine learning algorithm, which was shown to consistently favor white faces in the cropped previews of pictures.
Google apologizes after its Vision AI produced racist results
A Google service that automatically labels images produced starkly different results depending on the skin tone of the people in a given image. The company fixed the issue, but the problem is likely much broader.
By Nicolas Kayser-Bril for AlgorithmWatch on April 7, 2020
Some essential reading and research on race and technology
These resources are a starting point for the education that all responsible citizens should acquire about the intersection of race and technology.
From VentureBeat on June 2, 2020
Machine learning is a honeypot for phrenologists
When we say that “an algorithm is biased” we usually mean, “biased people made an algorithm.” This explains why so much machine learning prediction turns into phrenology.
By Cory Doctorow for Pluralistic on January 15, 2021
Timnit Gebru’s Exit From Google Exposes a Crisis in AI
The situation has made clear that the field needs to change. Here’s where to start, according to a current and a former Googler.
By Alex Hanna and Meredith Whittaker for WIRED on December 31, 2020
How the Netherlands uses A.I. for ethnic profiling
China using artificial intelligence to oppress the Uyghurs: sounds like a faraway problem? The Netherlands, too, tracks and persecutes specific population groups with algorithms. As in Roermond, where cameras raise the alarm at cars with Eastern European license plates.
By Florentijn van Rootselaar for OneWorld on January 14, 2021
How our data encodes systematic racism
Technologists must take responsibility for the toxic ideologies that our data sets and algorithms reflect.
By Deborah Raji for MIT Technology Review on December 10, 2020
We read the paper that forced Timnit Gebru out of Google. Here’s what it says
The company’s star ethics researcher highlighted the risks of large language models, which are key to Google’s business.
By Karen Hao for MIT Technology Review on December 4, 2020
‘There’s a chilling effect’: Google’s firing of leading AI ethicist spurs industry outrage
Timnit Gebru’s firing could damage Google’s reputation and ethical AI research within tech companies, industry leaders told Protocol.
By Anna Kramer for Protocol on December 3, 2020
Discriminating Systems: Gender, Race, and Power in AI
The diversity crisis in AI is well-documented and wide-reaching. It can be seen in unequal workplaces throughout industry and in academia, in the disparities in hiring and promotion, in the AI technologies that reflect and amplify biased stereotypes, and in the resurfacing of biological determinism in automated systems.
By Kate Crawford, Meredith Whittaker and Sarah Myers West for AI Now Institute on April 1, 2019
Designed to Deceive: Do These People Look Real to You?
The people in this story may look familiar, like ones you’ve seen on Facebook or Twitter or Tinder. But they don’t exist. They were born from the mind of a computer, and the technology behind them is improving at a startling pace.
By Kashmir Hill for The New York Times on November 21, 2020
Dataminr Targets Communities of Color for Police
Insiders say Dataminr’s “algorithmic” Twitter search involves human staffers perpetuating confirmation biases.
By Sam Biddle for The Intercept on October 21, 2020
Asymmetrical Power: The intransparency of the Dutch Police
In this interview with Jair Schalkwijk and Naomi Appelman, we try to bring some transparency to the use of facial recognition technologies in law enforcement.
By Margarita Osipian for The Hmm on October 8, 2020
Yes, facial recognition technology discriminates, but a ban is not the solution
Just as the death of George Floyd sparked worldwide protests, the biased image-processing technology PULSE did the same in the scientific world. There were calls for a ban, but neuroinformatics researcher Sennay Ghebreab wonders whether a digital iconoclasm solves the problem.
By Sennay Ghebreab for Vrij Nederland on October 5, 2020
NIST benchmarks show facial recognition technology still struggles to identify Black faces
NIST benchmarks suggest some facial recognition algorithms haven’t corrected historic bias — and are actually getting worse.
By Kyle Wiggers for VentureBeat on September 9, 2020
Digital Ethics in Higher Education: 2020
New technologies, especially those relying on artificial intelligence or data analytics, are exciting but also present ethical challenges that deserve our attention and action. Higher education can and must lead the way.
By John O’Brien for EDUCAUSE Review on May 18, 2020
Down with (discriminating) systems
As the EU formulates its response in its upcoming ‘Action Plan on Racism’, EDRi outlines why it must address structural racism in technology as part of upcoming legislation.
By Sarah Chander for European Digital Rights (EDRi) on September 2, 2020
Philosophers On GPT-3 (updated with replies by GPT-3)
Nine philosophers explore the various issues and questions raised by the newly released language model, GPT-3, in this edition of Philosophers On.
By Amanda Askell, Annette Zimmermann, C. Thi Nguyen, Carlos Montemayor, David Chalmers, GPT-3, Henry Shevlin, Justin Khoo, Regina Rini and Shannon Vallor for Daily Nous on July 30, 2020
Artificial racism
While the realization that algorithms are not sacred is beginning to sink in everywhere, we are using them in very dangerous ways.
By Hasna El Maroudi for Joop on June 19, 2019
Friction-Free Racism
Surveillance capitalism turns a profit by making people more comfortable with discrimination.
By Chris Gilliard for Real Life on October 15, 2018