Racist Technology in Action: Beauty is in the eye of the AI

Where people’s notion of beauty is often steeped in cultural preferences or plain prejudice, the objectivity of an AI system would surely allow it to access a more universal conception of beauty – or so thought the developers of Beauty.AI. Alex Zhavoronkov, who consulted on the development of the Beauty.AI system, described the dystopian motivation behind it clearly: “Humans are generally biased and there needs to be a robot to provide an impartial opinion. Beauty.AI is the first step in a much larger story, in which a mobile app trained to evaluate perception of human appearance will evolve into a caring personal assistant to help users look their best and retain their youthful looks.”

Continue reading “Racist Technology in Action: Beauty is in the eye of the AI”

AI recognition of patient race in medical imaging: a modelling study

Previous studies in medical imaging have shown disparate abilities of artificial intelligence (AI) to detect a person’s race, yet there is no known correlation for race on medical imaging that would be obvious to human experts when interpreting the images. We aimed to conduct a comprehensive evaluation of the ability of AI to recognise a patient’s racial identity from medical images.

By Ananth Reddy Bhimireddy, Ayis T. Pyrros, Brandon J. Price, Chima Okechukwu, Haoran Zhang, Hari Trivedi, Imon Banerjee, John L. Burns, Judy Wawira Gichoya, Laleh Seyyed-Kalantari, Lauren Oakden-Rayner, Leo Anthony Celi, Li-Ching Chen, Lyle J. Palmer, Marzyeh Ghassemi, Matthew P. Lungren, Natalie Dullerud, Ramon Correa, Ryan Wang, Saptarshi Purkayastha, Shih-Cheng Huang, Po-Chih Kuo and Zachary Zaiman for The Lancet on May 11, 2022

Don’t miss this 4-part journalism series on ‘AI Colonialism’

The MIT Technology Review has written a four-part series on how the impact of AI is “repeating the patterns of colonial history.” The Review is careful not to directly compare the current situation with the colonialist capturing of land, extraction of resources, and exploitation of people. Yet the series clearly shows that AI does further enrich the wealthy at the tremendous expense of the poor.

Continue reading “Don’t miss this 4-part journalism series on ‘AI Colonialism’”

Exploitative labour is central to the infrastructure of AI

In this piece, Julian Posada writes about a family of five in Venezuela, who synchronise their routines so that there will always be two people at the computer working for a crowdsourcing platform to make a living. They earn a few cents per task in a cryptocurrency and are only allowed to cash out once they’ve made at least the equivalent of USD 10. On average they earn about USD 20 per week, but their earnings can be erratic, resulting in extreme stress and precarity.

Continue reading “Exploitative labour is central to the infrastructure of AI”

Massive Predpol leak confirms that it drives racist policing

When you or I seek out evidence to back up our existing beliefs and ignore the evidence that shows we’re wrong, it’s called “confirmation bias.” It’s a well-understood phenomenon that none of us are immune to, and thoughtful people put a lot of effort into countering it in themselves.

By Cory Doctorow for Pluralistic on December 2, 2021

Dutch Scientific Council knows: AI is neither neutral nor always rational

AI should be seen as a new system technology, according to The Netherlands Scientific Council for Government Policy, meaning that its impact is large, affects the whole of society, and is hard to predict. In their new Mission AI report, the Council lists five challenges for successfully embedding system technologies in society, leading to ten recommendations for governments.

Continue reading “Dutch Scientific Council knows: AI is neither neutral nor always rational”

Opinion: Biden must act to get racism out of automated decision-making

Despite Biden’s announced commitment to advancing racial justice, not a single appointee to the task force has focused experience on civil rights and liberties in the development and use of AI. That has to change. Artificial intelligence, invisible but pervasive, affects vast swaths of American society and will affect many more. Biden must ensure that racial equity is prioritized in AI development.

By ReNika Moore for Washington Post on August 9, 2021

Discriminating Data

How big data and machine learning encode discrimination and create agitated clusters of comforting rage.

By Wendy Hui Kyong Chun for The MIT Press on November 1, 2021

Raziye Buse Çetin: ‘The absence of marginalised people in AI policymaking’

Creating welcoming and safe spaces for racialised people in policymaking is essential for addressing AI harms. Since the beginning of my career as an AI policy researcher, I’ve witnessed many important instances where people of color were almost totally absent from AI policy conversations. I remember very well the feeling of discomfort I experienced when I was stopped at the entrance of a launch event for a report on algorithmic bias. The person who was tasked with ushering people into the meeting room was convinced that I was not “in the right place”. Following a completely avoidable policing situation, I was in the room, but the room didn’t seem right to me. Although the topic was algorithmic bias and discrimination, I couldn’t spot one racialised person there — people who are most likely to experience algorithmic harm.

By Raziye Buse Çetin for Who Writes The Rules on March 11, 2019

Big Tech is propped up by a globally exploited workforce

Behind the promise of automation, advances of machine learning and AI, often paraded by tech companies like Amazon, Google, Facebook and Tesla, lies a deeply exploitative industry of cheap, human labour. In an excerpt published on Rest of the World from his forthcoming book, “Work Without the Worker: Labour in the Age of Platform Capitalism,” Phil Jones illustrates how the hidden labour of automation is outsourced to marginalised, racialised and disenfranchised populations within the Global North, as well as in the Global South.

Continue reading “Big Tech is propped up by a globally exploited workforce”

If AI is the problem, is debiasing the solution?

The development and deployment of artificial intelligence (AI) in all areas of public life have raised many concerns about the harmful consequences for society, in particular the impact on marginalised communities. EDRi’s latest report, “Beyond Debiasing: Regulating AI and its Inequalities”, authored by Agathe Balayn and Dr. Seda Gürses, argues that policymakers must tackle the root causes of the power imbalances caused by the pervasive use of AI systems. In promoting technical ‘debiasing’ as the main solution to AI-driven structural inequality, we risk vastly underestimating the scale of the social, economic and political problems AI systems can inflict.

By Agathe Balayn and Seda Gürses for European Digital Rights (EDRi) on September 21, 2021

Art as a stick between the digital spokes – Artists show that technology is not neutral

I carry my surname with pride: Ibrahim is the first name of my great-grandfather, which my father filled in when he came to the Netherlands from Egypt in the 1970s. My father has sadly passed away; he was the warmest and most kind-hearted man you could imagine. But something strange is going on with my surname: when a computer learns a language from everyday texts on the internet, it turns out that the computer rates non-Western names such as Ibrahim as less ‘pleasant’ than Western surnames (Caliskan et al., 2017).
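A minimal sketch of the kind of word-embedding association test behind the Caliskan et al. (2017) finding: the score compares how close a name’s vector sits to ‘pleasant’ versus ‘unpleasant’ words. The word lists and the random vectors below are illustrative placeholders; in practice the vectors would come from embeddings trained on web text (e.g. GloVe or word2vec), and that is where the disparity between names shows up.

```python
# Sketch of a word-embedding association score, in the spirit of the
# WEAT test from Caliskan et al. (2017). Placeholder vectors only.
import numpy as np


def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))


def association(word: str, pleasant: list[str], unpleasant: list[str],
                vec: dict[str, np.ndarray]) -> float:
    """Mean similarity to pleasant words minus mean similarity to unpleasant words."""
    s_plus = np.mean([cosine(vec[word], vec[a]) for a in pleasant])
    s_minus = np.mean([cosine(vec[word], vec[a]) for a in unpleasant])
    return float(s_plus - s_minus)


# Random vectors only demonstrate the mechanics; real embeddings trained on
# web text are what exhibit the bias described above.
rng = np.random.default_rng(0)
words = ["ibrahim", "smith", "love", "peace", "hatred", "failure"]
vectors = {w: rng.normal(size=50) for w in words}

for name in ["ibrahim", "smith"]:
    score = association(name, ["love", "peace"], ["hatred", "failure"], vectors)
    print(f"{name}: association with 'pleasant' = {score:+.3f}")
```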

By Meldrid Ibrahim for Mister Motley on August 24, 2021

Are We Automating Racism?

Many of us assume that tech is neutral, and we have turned to tech as a way to root out racism, sexism, or other “isms” plaguing human decision-making. But as data-driven systems become a bigger and bigger part of our lives, we also notice more and more when they fail, and, more importantly, that they don’t fail on everyone equally. Glad You Asked host Joss Fong wants to know: Why do we think tech is neutral? How do algorithms become biased? And how can we fix these algorithms before they cause harm?

From YouTube on March 31, 2021

Moses Namara

Working to break down the barriers keeping young Black people from careers in AI.

By Abby Ohlheiser for MIT Technology Review on June 30, 2021

Emma Pierson

She employs AI to get to the roots of health disparities across race, gender, and class.

By Neel V. Patel for MIT Technology Review on June 30, 2021

Human-in-the-loop is not the magic bullet to fix AI harms

In many discussions and policy proposals about addressing and fixing the harms of AI and algorithmic decision-making, much attention and hope has been placed on human oversight as a solution. This article by Ben Green and Amba Kak urges us to question the limits of human oversight, rather than seeing it as a magic bullet. For example, calling for ‘meaningful’ oversight sounds better in theory than it works in practice. Humans can be prone to automation bias, struggle to evaluate and make decisions based on the results of an algorithm, or exhibit racial biases in response to algorithms. Consequently, these effects can produce racist outcomes, as has already been shown in areas such as policing and housing.

Continue reading “Human-in-the-loop is not the magic bullet to fix AI harms”

AI and its hidden costs

In a recent interview with The Guardian, Kate Crawford discusses her new book, Atlas of AI, which delves into the broader landscape of how AI systems work by canvassing the structures of production and material realities. One example is ImageNet, a massive training dataset created by researchers from Stanford that is used to train and benchmark object recognition algorithms. It was made by scraping photos and images from across the web and hiring crowd workers to label them according to an outdated lexical database created in the 1980s.

Continue reading “AI and its hidden costs”

Racist Technology in Action: Predicting future criminals with a bias against Black people

In 2016, ProPublica investigated the fairness of COMPAS, a system used by courts in the United States to assess the likelihood that a defendant will commit another crime. COMPAS derives this risk score from a defendant’s answers to a risk assessment questionnaire, and judges are expected to take the prediction into account when they decide on sentencing.
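As a rough illustration of the kind of check ProPublica ran, the sketch below compares false positive rates (the share of defendants flagged as high risk who did not in fact reoffend) across two groups. The data frame is made up purely to show the calculation; ProPublica used the actual Broward County records and a threshold on the real COMPAS scores.

```python
# Hypothetical records: group label, whether the score flagged the person
# as high risk, and whether they actually reoffended within two years.
import pandas as pd

df = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "high_risk":  [1,   1,   0,   0,   1,   0,   0,   0],
    "reoffended": [0,   1,   0,   1,   0,   0,   1,   0],
})

# False positive rate per group: P(flagged high risk | did not reoffend).
no_reoffense = df[df["reoffended"] == 0]
fpr = no_reoffense.groupby("group")["high_risk"].mean()
print(fpr)
```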

Continue reading “Racist Technology in Action: Predicting future criminals with a bias against Black people”
