As I write this piece, an Israeli airstrike has hit makeshift tents near Al-Aqsa Martyrs Hospital in Deir al Balah, burning tents and people alive. The Israeli military bombed an aid distribution point in Jabalia, wounding 50 people who were waiting for flour. The entire north of Gaza has been besieged by the Israeli Occupying Forces for the past 10 days, trapping 400,000 Palestinians without food, water, and medical supplies. Every day since last October, Israel, with the help of its western allies, has intensified its assault on Palestine, each time pushing the boundaries of what is comprehensible. There are no moral or legal boundaries that Israel and its allies will not cross. The systematic ethnic cleansing of Palestine, which has been the basis of the settler-colonial Zionist project since its inception, has accelerated since 7th October 2023. From Palestine to Lebanon, Syria and Yemen, Israel and its allies continue their violence with impunity. Meanwhile, mainstream western news media are either silent in their reporting or complicit in abetting the ongoing destruction of the Palestinian people and the resistance.
Continue reading “Tech companies’ complicity in the ongoing genocide in Gaza and Palestine”
Racist Technology in Action: AI detection of emotion rates Black basketball players as ‘angrier’ than their White counterparts
In 2018, Lauren Rhue showed that two leading emotion detection software products had a racial bias against Black men: Face++ rated them as angrier, and Microsoft AI rated them as more contemptuous.
Continue reading “Racist Technology in Action: AI detection of emotion rates Black basketball players as ‘angrier’ than their White counterparts”
AI is biased. Whose fault is that?
I talk with Cynthia Liem. She is a researcher in the field of trustworthy and responsible artificial intelligence at TU Delft. Cynthia is known for her analysis of the fraud detection algorithms that the Belastingdienst, the Dutch tax authority, used in the childcare benefits scandal.
By Cynthia Liem and Ilyaz Nasrullah for BNR Nieuwsradio on October 20, 2023
Attempts to eliminate bias through diversifying datasets? A distraction from the root of the problem
In this eloquent and haunting piece, Hito Steyerl weaves together the eugenicist history of statistics and its ongoing integration into machine learning. She explains why attempts to eliminate bias in facial recognition technology by diversifying datasets obscure the root of the problem: machine learning and automation are fundamentally reliant on extracting and exploiting human labour.
Continue reading “Attempts to eliminate bias through diversifying datasets? A distraction from the root of the problem”
Racist Technology in Action: Image recognition is still not capable of differentiating gorillas from Black people
If this title feels like déjà vu, it is because you most likely have, in fact, seen this before (perhaps even in our newsletter). The controversy first arose back in 2015, when Google released image recognition software that kept mislabelling Black people as gorillas (read here and here).
Continue reading “Racist Technology in Action: Image recognition is still not capable of differentiating gorillas from Black people”
Google’s Photo App Still Can’t Find Gorillas. And Neither Can Apple’s.
Eight years after a controversy over Black people being mislabeled as gorillas by image analysis software — and despite big advances in computer vision — tech giants still fear repeating the mistake.
By Kashmir Hill and Nico Grant for The New York Times on May 22, 2023
Mean Images
An artist considers a new form of machinic representation: the statistical rendering of large datasets, indexed to the probable rather than the real of photography; to the uncanny composite rather than the abstraction of the graph.
By Hito Steyerl for New Left Review on April 28, 2023
Uber’s facial recognition is locking Indian drivers out of their accounts
Some drivers in India are finding their accounts permanently blocked. Better transparency of the AI technology could help gig workers.
By Varsha Bansal for MIT Technology Review on December 6, 2022
How Big Tech Is Importing India’s Caste Legacy to Silicon Valley
Graduates from the Indian Institutes of Technology are highly sought after by employers. They can also bring problems from home.
By Saritha Rai for Bloomberg on March 11, 2021
The Case of the Creepy Algorithm That ‘Predicted’ Teen Pregnancy
A government leader in Argentina hailed the AI, which was fed invasive data about girls. The feminist pushback could inform the future of health tech.
By Alexa Hagerty, Diego Jemio and Florencia Aranda for WIRED on February 16, 2022
ADCU initiates legal action against Uber’s workplace use of racially discriminatory facial recognition systems
ADCU has launched legal action against Uber over the unfair dismissal of a driver and a courier after the company’s facial recognition system failed to identify them.
By James Farrar, Paul Jennings and Yaseen Aslam for The App Drivers and Couriers Union on October 6, 2021
Big Tech is propped up by a globally exploited workforce
Behind the promise of automation and the advances in machine learning and AI often paraded by tech companies like Amazon, Google, Facebook and Tesla lies a deeply exploitative industry of cheap human labour. In an excerpt published on Rest of the World from his forthcoming book, “Work Without the Worker: Labour in the Age of Platform Capitalism,” Phil Jones illustrates how the hidden labour of automation is outsourced to marginalised, racialised and disenfranchised populations in the Global North as well as the Global South.
Continue reading “Big Tech is propped up by a globally exploited workforce”
Race and Technology: A Research Lecture Series
Race and technology are closely intertwined, continuously influencing and reshaping one another. While algorithmic bias has received increased attention in recent years, it is only one of the many ways that technology and race intersect in computer science, public health, digital media, gaming, surveillance, and other domains. To build inclusive technologies that empower us all, we must understand how technologies and race construct one another and with what consequences.
From Microsoft
Tech companies poured 3.8 billion USD into racial justice, but to what avail?
The Plug and Fast Company looked at what happened to the 3.8 billion dollars that US-based tech companies committed to diversity, equity, and inclusion as their response to the Black Lives Matter protests.
Continue reading “Tech companies poured 3.8 billion USD into racial justice, but to what avail?”
Microsoft’s Kate Crawford: ‘AI is neither artificial nor intelligent’
The AI researcher on how natural resources and human labour drive machine learning and the regressive stereotypes that are baked into its algorithms.
By Kate Crawford for The Guardian on June 6, 2021
Racist Technology in Action: Speech recognition systems by major tech companies are biased
From Siri to Alexa to Google Now, voice-based virtual assistants have become increasingly ubiquitous in our daily lives. So it is unsurprising that yet another AI technology – speech recognition systems – has been reported to be biased against Black people.
Continue reading “Racist Technology in Action: Speech recognition systems by major tech companies are biased”
Data-Informed Predictive Policing Was Heralded As Less Biased. Is It?
Critics say it merely techwashes injustice.
By Annie Gilbertson for The Markup on August 20, 2020