Massive Predpol leak confirms that it drives racist policing

When you or I seek out evidence to back up our existing beliefs and ignore the evidence that shows we’re wrong, it’s called “confirmation bias.” It’s a well-understood phenomenon that none of us are immune to, and thoughtful people put a lot of effort into countering it in themselves.

By Cory Doctorow for Pluralistic on December 2, 2021
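
Predictive policing exhibits a machine version of that same bias, in what researchers such as Ensign et al. have called a "runaway feedback loop": patrols go where past records point, and new records are only generated where patrols go. Below is a minimal simulation sketch of that dynamic. It is not PredPol's proprietary algorithm, and the neighbourhood names and numbers are purely illustrative.

```python
import random

random.seed(42)

# Two neighbourhoods with the SAME underlying crime rate.
true_crime_rate = {"A": 0.1, "B": 0.1}
# Historical over-policing: neighbourhood A starts with more *recorded* crime.
recorded_crimes = {"A": 20, "B": 10}

for day in range(200):
    # "Prediction": send the patrol wherever past records point.
    patrolled = max(recorded_crimes, key=recorded_crimes.get)
    # Crime is only recorded where police are actually looking.
    if random.random() < true_crime_rate[patrolled]:
        recorded_crimes[patrolled] += 1

print(recorded_crimes)  # e.g. {'A': 39, 'B': 10}: the initial skew only grows
```

Because crime is only recorded where the system sends officers, the data always "confirms" the prediction, and an initial skew inherited from historically over-policed neighbourhoods compounds indefinitely.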

Dutch Scientific Council knows: AI is neither neutral nor always rational

AI should be seen as a new system technology, according to The Netherlands Scientific Council for Government Policy, meaning that its impact is large, society-wide, and hard to predict. In its new Mission AI report, the Council lists five challenges for successfully embedding system technologies in society, leading to ten recommendations for governments.

Continue reading “Dutch Scientific Council knows: AI is neither neutral nor always rational”

Opinion: Biden must act to get racism out of automated decision-making

Despite Biden’s announced commitment to advancing racial justice, not a single appointee to the task force has focused experience on civil rights and liberties in the development and use of AI. That has to change. Artificial intelligence, invisible but pervasive, affects vast swaths of American society and will affect many more. Biden must ensure that racial equity is prioritized in AI development.

By ReNika Moore for Washington Post on August 9, 2021

Discriminating Data

How big data and machine learning encode discrimination and create agitated clusters of comforting rage.

By Wendy Hui Kyong Chun for The MIT Press on November 1, 2021

Raziye Buse Çetin: ‘The absence of marginalised people in AI policymaking’

Creating welcoming and safe spaces for racialised people in policymaking is essential for addressing AI harms. Since the beginning of my career as an AI policy researcher, I’ve witnessed many important instances where people of color were almost totally absent from AI policy conversations. I remember very well the discomfort I felt when I was stopped at the entrance of a launch event for a report on algorithmic bias. The person tasked with ushering people into the meeting room was convinced that I was not “in the right place”. After that completely avoidable policing encounter, I was in the room, but the room didn’t seem right to me: although the topic was algorithmic bias and discrimination, I couldn’t spot a single racialised person there — the very people most likely to experience algorithmic harm.

By Raziye Buse Çetin for Who Writes The Rules on March 11, 2019

Big Tech is propped up by a globally exploited workforce

Behind the promise of automation and the advances of machine learning and AI, often paraded by tech companies like Amazon, Google, Facebook and Tesla, lies a deeply exploitative industry of cheap human labour. In an excerpt from his forthcoming book, “Work Without the Worker: Labour in the Age of Platform Capitalism”, published on Rest of World, Phil Jones illustrates how the hidden labour of automation is outsourced to marginalised, racialised and disenfranchised populations within the Global North as well as in the Global South.

Continue reading “Big Tech is propped up by a globally exploited workforce”

If AI is the problem, is debiasing the solution?

The development and deployment of artificial intelligence (AI) in all areas of public life have raised many concerns about harmful consequences for society, in particular the impact on marginalised communities. EDRi’s latest report, “Beyond Debiasing: Regulating AI and its Inequalities”, authored by Agathe Balayn and Dr. Seda Gürses, argues that policymakers must tackle the root causes of the power imbalances caused by the pervasive use of AI systems. In promoting technical ‘debiasing’ as the main solution to AI-driven structural inequality, we risk vastly underestimating the scale of the social, economic and political problems AI systems can inflict.

By Agathe Balayn and Seda Gürses for European Digital Rights (EDRi) on September 21, 2021

Art as a stick between the digital spokes – Artists show that technology is not neutral

I carry my surname with pride: Ibrahim is the first name of my great-grandfather, which my father filled in when he came from Egypt to the Netherlands in the 1970s. My father has sadly passed away; he was the warmest and most kind-hearted man you could imagine. But something strange is going on with my surname: when a computer learns a language from everyday texts on the internet, it turns out to rate non-Western names such as Ibrahim as less ‘pleasant’ than Western surnames (Caliskan et al., 2017).

By Meldrid Ibrahim for Mister Motley on August 24, 2021
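
The finding Ibrahim cites (Caliskan et al., 2017) can be checked in a few lines: in off-the-shelf word embeddings trained on web text, the vectors for some names sit measurably closer to “pleasant” words than others. The sketch below is a simplified version of their word-embedding association test, not the paper’s full permutation-test statistic; the model choice, the word lists and the two example names are illustrative, and results will vary by embedding model.

```python
import gensim.downloader as api
import numpy as np

# Any embedding trained on everyday web text will do; glove-wiki-gigaword-100
# is a small model available through gensim's downloader.
model = api.load("glove-wiki-gigaword-100")

pleasant = ["love", "peace", "happy", "friend", "gentle"]
unpleasant = ["hate", "war", "awful", "filth", "ugly"]

def mean_similarity(word, attributes):
    # Average cosine similarity between a word and a set of attribute words.
    return np.mean([model.similarity(word, a) for a in attributes])

for name in ["ibrahim", "smith"]:
    score = mean_similarity(name, pleasant) - mean_similarity(name, unpleasant)
    print(f"{name}: pleasantness score {score:+.3f}")
```

A positive score means the name’s vector leans towards the pleasant words; Caliskan et al. found systematic gaps of this kind between European-American and other names across several embedding models.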

Are We Automating Racism?

Many of us assume that tech is neutral, and we have turned to tech as a way to root out racism, sexism, or other “isms” plaguing human decision-making. But as data-driven systems become a bigger and bigger part of our lives, we also notice more and more when they fail, and, more importantly, that they don’t fail on everyone equally. Glad You Asked host Joss Fong wants to know: Why do we think tech is neutral? How do algorithms become biased? And how can we fix these algorithms before they cause harm?

From YouTube on March 31, 2021

Moses Namara

Working to break down the barriers keeping young Black people from careers in AI.

By Abby Ohlheiser for MIT Technology Review on June 30, 2021

Emma Pierson

She employs AI to get to the roots of health disparities across race, gender, and class.

By Neel V. Patel for MIT Technology Review on June 30, 2021

Human-in-the-loop is not the magic bullet to fix AI harms

In many discussions and policy proposals on addressing the harms of AI and algorithmic decision-making, much attention and hope have been placed on human oversight as a solution. This article by Ben Green and Amba Kak urges us to question the limits of human oversight rather than seeing it as a magic bullet; calling for ‘meaningful’ oversight, for example, sounds better in theory than it works in practice. Humans can be prone to automation bias, struggle to evaluate and act on an algorithm’s output, or exhibit racial biases in response to algorithms. Consequently, human oversight can itself produce racist outcomes, as has already been demonstrated in areas such as policing and housing.

Continue reading “Human-in-the-loop is not the magic bullet to fix AI harms”

AI and its hidden costs

In a recent interview with The Guardian, Kate Crawford discusses her new book, Atlas of AI, which examines how AI systems work by canvassing the structures of production and material realities behind them. One example is ImageNet, a massive training dataset created by researchers from Stanford that is used to benchmark object-recognition algorithms. It was made by scraping photos and images from across the web and hiring crowd workers to label them according to WordNet, an outdated lexical database created in the 1980s.

Continue reading “AI and its hidden costs”
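
That dependency is literal: each ImageNet class ID, such as n01440764, is a WordNet part-of-speech tag plus an offset into the 1980s-era database Crawford refers to. A small sketch of that mapping using NLTK’s WordNet interface (the class ID shown is one of the standard ImageNet-1k categories):

```python
import nltk
from nltk.corpus import wordnet as wn

nltk.download("wordnet", quiet=True)  # fetch the WordNet data on first run

def imagenet_class_to_words(wnid):
    # An ImageNet class ID like "n01440764" is a WordNet POS ('n' = noun)
    # followed by an 8-digit offset into the WordNet database.
    pos, offset = wnid[0], int(wnid[1:])
    synset = wn.synset_from_pos_and_offset(pos, offset)
    return synset.lemma_names(), synset.definition()

print(imagenet_class_to_words("n01440764"))
# expected: (['tench', 'Tinca_tinca'], 'freshwater dace-like game fish ...')
```

Every ImageNet label, including the offensive person-category labels later documented by researchers, traces back to a WordNet synset in exactly this way.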

Racist Technology in Action: Predicting future criminals with a bias against Black people

In 2016, ProPublica investigated the fairness of COMPAS, a system used by courts in the United States to assess the likelihood of a defendant committing another crime. COMPAS uses a risk assessment form to score how likely a defendant is to reoffend, and judges are expected to take this risk prediction into account when they decide on sentencing.

Continue reading “Racist Technology in Action: Predicting future criminals with a bias against Black people”
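
ProPublica published both its data and its methodology, so the headline disparity (Black defendants who did not reoffend were roughly twice as likely as white defendants to be labelled higher risk) can be checked directly. A sketch against the public dataset; it follows ProPublica’s filtering choices only loosely, so the exact figures will differ slightly from the published ones:

```python
import pandas as pd

# ProPublica's public COMPAS dataset from their compas-analysis repository.
URL = ("https://raw.githubusercontent.com/propublica/"
       "compas-analysis/master/compas-scores-two-years.csv")
df = pd.read_csv(URL)

# "Higher risk" = COMPAS decile score of 5 or more, the cut-off ProPublica used.
df["high_risk"] = df["decile_score"] > 4

for race in ["African-American", "Caucasian"]:
    group = df[df["race"] == race]
    # False positive rate: labelled higher risk among those who did NOT reoffend
    # within two years.
    no_recid = group[group["two_year_recid"] == 0]
    print(f"{race}: false positive rate {no_recid['high_risk'].mean():.2f}")
```

The asymmetry in these false positive rates, rather than overall accuracy, is what ProPublica’s “bias against Black people” finding rests on.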

Sentenced by Algorithm

Computer programs used to predict recidivism and determine prison terms have a high error rate, a secret design, and a demonstrable racial bias.

By Jed S. Rakoff for The New York Review of Books on June 10, 2021

Why EU needs to be wary that AI will increase racial profiling

This week the EU announces new regulations on artificial intelligence. It needs to set clear limits on the most harmful uses of AI, including predictive policing, biometric mass surveillance, and applications that exacerbate historic patterns of racist policing.

By Fieke Jansen and Sarah Chander for EUobserver on April 19, 2021
