The MIT Technology Review has published a four-part series on how the impact of AI is “repeating the patterns of colonial history.” The Review is careful not to equate the current situation directly with the colonial capture of land, extraction of resources, and exploitation of people. Yet it clearly shows that AI further enriches the wealthy at the tremendous expense of the poor.
Exploitative labour is central to the infrastructure of AI
In this piece, Julian Posada writes about a family of five in Venezuela who synchronise their routines so that two people are always at the computer, working for a crowdsourcing platform to make a living. They earn a few cents per task, paid in a cryptocurrency, and are only allowed to cash out once they have earned at least the equivalent of USD 10. On average they make about USD 20 per week, but their earnings can be erratic, resulting in extreme stress and precarity.
The Case of the Creepy Algorithm That ‘Predicted’ Teen Pregnancy
A government leader in Argentina hailed the AI, which was fed invasive data about girls. The feminist pushback could inform the future of health tech.
By Alexa Hagerty, Diego Jemio and Florencia Aranda for WIRED on February 16, 2022
Don’t ask if artificial intelligence is good or fair, ask how it shifts power
Those who could be exploited by AI should be shaping its projects.
By Pratyusha Kalluri for Nature on July 7, 2020
Crime Prediction Keeps Society Stuck in the Past
So long as algorithms are trained on racist historical data and outdated values, there will be no opportunities for change.
By Chris Gilliard for WIRED on January 2, 2022
The Humanities Can’t Save Big Tech From Itself
Hiring sociocultural workers to correct bias overlooks the limitations of these underappreciated fields.
By Elena Maris for WIRED on January 12, 2022
For truly ethical AI, its research must be independent from big tech
We must curb the power of Silicon Valley and protect those who speak up about the harms of AI.
By Timnit Gebru for The Guardian on December 6, 2021
Google fired its star AI researcher one year ago. Now she’s launching her own institute
Timnit Gebru is launching Distributed Artificial Intelligence Research Institute (DAIR) to document AI’s harms on marginalized groups.
By Nitasha Tiku for Washington Post on December 2, 2021
Massive Predpol leak confirms that it drives racist policing
When you or I seek out evidence to back up our existing beliefs and ignore the evidence that shows we’re wrong, it’s called “confirmation bias.” It’s a well-understood phenomenon that none of us are immune to, and thoughtful people put a lot of effort into countering it in themselves.
By Cory Doctorow for Pluralistic on December 2, 2021
Civil society calls on the EU to put fundamental rights first in the AI Act
Today, 30 November 2021, European Digital Rights (EDRi) and 114 civil society organisations launched a collective statement to call for an Artificial Intelligence Act (AIA) which foregrounds fundamental rights.
From European Digital Rights (EDRi) on November 30, 2021
Dutch Scientific Council knows: AI is neither neutral nor always rational
AI should be seen as a new system technology, according to The Netherlands Scientific Council for Government Policy, meaning that its impact is large, affects the whole of society, and is hard to predict. In its new report, Mission AI, the Council lists five challenges for successfully embedding system technologies in society, leading to ten recommendations for governments.
Racist Technology in Action: an AI for ethical advice turns out to be super racist
In mid-October 2021, the Allen Institute for AI launched Delphi, a research prototype designed “to model people’s moral judgments on a variety of everyday situations.” In simple words: they made a machine that tries to do ethics.
Opinion: Biden must act to get racism out of automated decision-making
Despite Biden’s announced commitment to advancing racial justice, not a single appointee to the task force has focused experience in civil rights and liberties in the development and use of AI. That has to change. Artificial intelligence, invisible but pervasive, affects vast swaths of American society and will affect many more. Biden must ensure that racial equity is prioritized in AI development.
By ReNika Moore for Washington Post on August 9, 2021
Revealed: the software that studies your Facebook friends to predict who may commit a crime
Voyager, which pitches its tech to police, has suggested that indicators such as Instagram usernames showing Arab pride can signal an inclination towards extremism.
By Johana Bhuiyan and Sam Levin for The Guardian on November 17, 2021
Discriminating Data
How big data and machine learning encode discrimination and create agitated clusters of comforting rage.
By Wendy Hui Kyong Chun for The MIT Press on November 1, 2021
Scientists Built an AI to Give Ethical Advice, But It Turned Out Super Racist
Researchers at the Allen Institute for AI created Ask Delphi to make ethical judgments — but it turned out to be awfully bigoted and racist instead.
By Tony Tran for Futurism on October 22, 2021
AI projects to tackle racial inequality in UK healthcare, says Javid
Health secretary signs up to hi-tech schemes countering health disparities and reflecting minority ethnic groups’ data.
By Andrew Gregory for The Guardian on October 20, 2021
Raziye Buse Çetin: ‘The absence of marginalised people in AI policymaking’
Creating welcoming and safe spaces for racialised people in policymaking is essential for addressing AI harms. Since the beginning of my career as an AI policy researcher, I’ve witnessed many important instances where people of color were almost totally absent from AI policy conversations. I remember very well the discomfort I felt when I was stopped at the entrance of a launch event for a report on algorithmic bias. The person tasked with ushering people into the meeting room was convinced that I was not “in the right place”. After this completely avoidable policing situation, I was in the room, but the room didn’t seem right to me. Although the topic was algorithmic bias and discrimination, I couldn’t spot a single racialised person there, even though racialised people are the most likely to experience algorithmic harm.
By Raziye Buse Çetin for Who Writes The Rules on March 11, 2019
Why ‘debiasing’ will not solve racist AI
Policymakers are starting to understand that many systems running on AI exhibit some form of racial bias. So they are happy when computer scientists tell them that ‘debiasing’ is a solution to these problems: test the system for racial and other forms of bias, and make adjustments until these biases no longer show up in the results.
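The ‘debiasing’ recipe described above is, at its core, a statistical test. As a purely illustrative sketch (not taken from the article, with invented data and group labels), here is what checking one commonly used bias metric, the demographic parity gap, might look like:

```python
# Hypothetical sketch of one step in a 'debiasing' audit: compare a
# model's positive-prediction rates across demographic groups.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between any two
    groups; 0.0 would mean perfect parity on this one metric."""
    positives, totals = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Invented example data: 1 = favourable decision, 0 = unfavourable.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(preds, groups))  # 0.75 - 0.25 = 0.5
```

A system can be tuned until a gap like this disappears and still cause structural harm; that a passing score on such a narrow numeric check gets mistaken for fairness is precisely the article’s objection.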
Big Tech is propped up by a globally exploited workforce
Behind the promise of automation and the advances in machine learning and AI that are often paraded by tech companies like Amazon, Google, Facebook and Tesla lies a deeply exploitative industry of cheap human labour. In an excerpt from his forthcoming book, “Work Without the Worker: Labour in the Age of Platform Capitalism,” published on Rest of World, Phil Jones illustrates how the hidden labour of automation is outsourced to marginalised, racialised and disenfranchised populations within the Global North, as well as in the Global South.
If AI is the problem, is debiasing the solution?
The development and deployment of artificial intelligence (AI) in all areas of public life have raised many concerns about harmful consequences for society, in particular the impact on marginalised communities. EDRi’s latest report, “Beyond Debiasing: Regulating AI and its Inequalities,” authored by Agathe Balayn and Dr. Seda Gürses, argues that policymakers must tackle the root causes of the power imbalances caused by the pervasive use of AI systems. In promoting technical ‘debiasing’ as the main solution to AI-driven structural inequality, we risk vastly underestimating the scale of the social, economic and political problems AI systems can inflict.
By Agathe Balayn and Seda Gürses for European Digital Rights (EDRi) on September 21, 2021
How Stereotyping and Bias Lingers in Product Design
Brands originally built on racist stereotypes have existed for more than a century. Now racial prejudice is also creeping into the design of tech products and algorithms.
From YouTube on September 15, 2021
How Artificial Intelligence Can Deepen Racial and Economic Inequities
The Biden administration must prioritize and address all the ways that AI and technology can exacerbate racial and other inequities.
By Olga Akselrod for American Civil Liberties Union (ACLU) on July 13, 2021
Facebook Apologizes After A.I. Puts ‘Primates’ Label on Video of Black Men
Facebook called it “an unacceptable error.” The company has struggled with other issues related to race.
By Ryan Mac for The New York Times on September 3, 2021
Art as a stick between the digital spokes – Artists show that technology is not neutral
I carry my surname with pride: Ibrahim is the first name of my great-grandfather, which my father filled in when he came to the Netherlands from Egypt in the 1970s. My father has sadly passed away; he was the warmest and most kind-hearted man you could imagine. But something odd is going on with my surname: when a computer learns a language from everyday texts on the internet, it turns out that the computer rates non-Western names such as Ibrahim as less ‘pleasant’ than Western surnames (Caliskan et al., 2017).
By Meldrid Ibrahim for Mister Motley on August 24, 2021
Researchers find racial discrimination in ‘dynamic pricing’ algorithms used by Uber, Lyft, and others
A preprint study shows ride-hailing services like Uber, Lyft, and Via charge higher prices in certain neighborhoods based on racial and other biases.
By Kyle Wiggers for VentureBeat on June 12, 2020
Are we automating racism?
Vox host Joss Fong wanted to know… “Why do we think tech is neutral? How do algorithms become biased? And how can we fix these algorithms before they cause harm?”
Are We Automating Racism?
Many of us assume that tech is neutral, and we have turned to tech as a way to root out racism, sexism, or other “isms” plaguing human decision-making. But as data-driven systems become a bigger and bigger part of our lives, we also notice more and more when they fail, and, more importantly, that they don’t fail on everyone equally. Glad You Asked host Joss Fong wants to know: Why do we think tech is neutral? How do algorithms become biased? And how can we fix these algorithms before they cause harm?
From YouTube on March 31, 2021
Moses Namara
Working to break down the barriers keeping young Black people from careers in AI.
By Abby Ohlheiser for MIT Technology Review on June 30, 2021
Emma Pierson
She employs AI to get to the roots of health disparities across race, gender, and class.
By Neel V. Patel for MIT Technology Review on June 30, 2021
Human-in-the-loop is not the magic bullet to fix AI harms
In many discussions and policy proposals related to addressing the harms of AI and algorithmic decision-making, much attention and hope have been placed on human oversight as a solution. This article by Ben Green and Amba Kak urges us to question the limits of human oversight, rather than seeing it as a magic bullet. For example, calling for ‘meaningful’ oversight sounds better in theory than in practice: humans can be prone to automation bias, struggle to evaluate and act on an algorithm’s output, or exhibit racial biases in response to algorithms. Consequently, human oversight can itself produce racist outcomes, as has already been shown in areas such as policing and housing.
The False Comfort of Human Oversight as an Antidote to A.I. Harm
Humans are being tasked with overseeing algorithms that were put in place with the promise of augmenting human deficiencies.
By Amba Kak and Ben Green for Slate Magazine on June 15, 2021
Inside the fight to reclaim AI from Big Tech’s control
For years, Big Tech has set the global AI research agenda. Now, groups like Black in AI and Queer in AI are upending the field’s power dynamics to build AI that serves people.
By Karen Hao for MIT Technology Review on June 14, 2021
AI and its hidden costs
In a recent interview with The Guardian, Kate Crawford discusses her new book, Atlas of AI, which delves into the broader landscape of how AI systems work by canvassing their structures of production and material realities. One example is ImageNet, a massive training dataset created by researchers from Stanford that is used to train and test object recognition algorithms. It was made by scraping photos and images from across the web and hiring crowd workers to label them according to an outdated lexical database created in the 1980s.
Racist Technology in Action: Predicting future criminals with a bias against Black people
In 2016, ProPublica investigated the fairness of COMPAS, a system used by courts in the United States to assess the likelihood of a defendant committing another crime. COMPAS bases this prediction on a risk assessment questionnaire filled out about the defendant. Judges are expected to take the resulting risk score into account when they decide on sentencing.
The hidden work created by artificial intelligence programs
Successful and ethical artificial intelligence programs take into account behind-the-scenes ‘repair work’ and ‘ghost workers.’
By Sara Brown for MIT Sloan on May 4, 2021
Microsoft’s Kate Crawford: ‘AI is neither artificial nor intelligent’
The AI researcher on how natural resources and human labour drive machine learning and the regressive stereotypes that are baked into its algorithms.
By Kate Crawford for The Guardian on June 6, 2021
Demographic skews in training data create algorithmic errors
Women and people of colour are underrepresented and depicted with stereotypes.
From The Economist on June 5, 2021
Sentenced by Algorithm
Computer programs used to predict recidivism and determine prison terms have a high error rate, a secret design, and a demonstrable racial bias.
By Jed S. Rakoff for The New York Review of Books on June 10, 2021
Image classification algorithms at Apple, Google still push racist tropes
Automated systems from Apple and Google label characters with dark skin as “Animals”.
By Nicolas Kayser-Bril for AlgorithmWatch on May 14, 2021