On Race, AI, and Representation: Or, Why Democracy Now Needs To Redo Its June 1 Segment

On June 1, Democracy Now featured a roundtable discussion, hosted by Amy Goodman and Nermeen Shaikh, with three experts on Artificial Intelligence (AI) about their views on AI in the world. They were Yoshua Bengio, a computer scientist at the Université de Montréal long considered a “godfather of AI”; Tawana Petty, an organiser and Director of Policy at the Algorithmic Justice League (AJL); and Max Tegmark, a physicist at the Massachusetts Institute of Technology. Recently, the Future of Life Institute, of which Tegmark is president, issued an open letter calling “on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” Bengio is a signatory to the letter (as is Elon Musk). The AJL has been around since 2016 and has (along with other organisations) been calling for a public interrogation of racialised surveillance technology, the use of police robots, and other ways in which AI can be directly responsible for bodily harm and even death.

By Yasmin Nair for Yasmin Nair on June 3, 2023

GPT detectors are biased against non-native English writers

The rapid adoption of generative language models has brought about substantial advancements in digital communication, while simultaneously raising concerns regarding the potential misuse of AI-generated content. Although numerous detection methods have been proposed to differentiate between AI and human-generated content, the fairness and robustness of these detectors remain underexplored. In this study, we evaluate the performance of several widely-used GPT detectors using writing samples from native and non-native English writers. Our findings reveal that these detectors consistently misclassify non-native English writing samples as AI-generated, whereas native writing samples are accurately identified. Furthermore, we demonstrate that simple prompting strategies can not only mitigate this bias but also effectively bypass GPT detectors, suggesting that GPT detectors may unintentionally penalize writers with constrained linguistic expressions. Our results call for a broader conversation about the ethical implications of deploying ChatGPT content detectors and caution against their use in evaluative or educational settings, particularly when they may inadvertently penalize or exclude non-native English speakers from the global discourse.

By Eric Wu, James Zou, Mert Yuksekgonul, Weixin Liang and Yining Mao for arXiv.org on April 18, 2023
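
The core of an evaluation like this is straightforward to reproduce in outline: run a detector over human-written samples from each group and compare how often it wrongly flags them as AI-generated. Below is a minimal Python sketch, not the authors' code; `detect_ai_probability`, the essay lists, and the 0.5 threshold are hypothetical stand-ins for whichever detector and data one uses.

```python
from typing import Callable, List

def false_positive_rate(detector: Callable[[str], float],
                        human_texts: List[str],
                        threshold: float = 0.5) -> float:
    """Fraction of human-written texts wrongly flagged as AI-generated."""
    flagged = sum(detector(text) >= threshold for text in human_texts)
    return flagged / len(human_texts)

# Hypothetical usage: compare misclassification of human writing across groups.
# fpr_native = false_positive_rate(detect_ai_probability, native_essays)
# fpr_non_native = false_positive_rate(detect_ai_probability, non_native_essays)
# print(f"native FPR: {fpr_native:.1%}  non-native FPR: {fpr_non_native:.1%}")
```

A large gap between the two rates is the kind of disparity the study reports for non-native English writers.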

Mean Images

An artist considers a new form of machinic representation: the statistical rendering of large datasets, indexed to the probable rather than the real of photography; to the uncanny composite rather than the abstraction of the graph.

By Hito Steyerl for New Left Review on April 28, 2023

Ethnic Profiling

Whistleblower reveals Netherlands’ use of secret and potentially illegal algorithm to score visa applicants.

By Ariadne Papagapitos, Carola Houtekamer, Crofton Black, Daniel Howden, Gabriel Geiger, Klaas van Dijken, Merijn Rengers and Nalinee Maleeyakul for Lighthouse Reports on April 24, 2023

‘Watch out with this visa application’, warns the algorithm that fosters discrimination. The ministry ignores the criticism

Visa policy: The Ministry of Foreign Affairs outsources the paperwork around visa applications to foreign companies as much as possible. But the risk of unequal treatment through the profiling of applicants remains. Criticism of this from the internal privacy regulator was brushed aside by the ministry.

By Carola Houtekamer, Merijn Rengers and Nalinee Maleeyakul for NRC on April 23, 2023

The black box of algorithms: how discrimination is automated

This week Evelyn dives into the (anything but male) world of glitch art, we discuss the algorithm that the municipality of Rotterdam used for years to predict which welfare recipients might tamper with their benefits, and we call in podcast figurehead Lieven Heeremans.

By Evelyn Austin, Inge Wannet, Joran van Apeldoorn, Lieven Heeremans and Nadia Benaissa for Bits of Freedom on April 15, 2023

Racist Technology in Action: You look similar to someone we didn’t like → Dutch visa denied

Ignoring earlier Dutch failures in automated decision making, and ignoring advice from its own experts, the Dutch Ministry of Foreign Affairs has decided to cut costs and cut corners by implementing a discriminatory profiling system to process visa applications.

Continue reading “Racist Technology in Action: You look similar to someone we didn’t like → Dutch visa denied”

Racist Technology in Action: Racial disparities in the scoring system used for housing allocation in L.A.

Another investigation by The Markup found significant racial disparities in the assessment system used by the Los Angeles Homeless Services Authority (LAHSA), the body responsible for coordinating homelessness services in Los Angeles. This assessment system relies on a tool, the Vulnerability Index-Service Prioritisation Decision Assistance Tool (VI-SPDAT), to score applicants and assess whether they qualify for subsidised permanent housing.

Continue reading “Racist Technology in Action: Racial disparities in the scoring system used for housing allocation in L.A.”

You Are Not a Parrot

You are not a parrot. And a chatbot is not a human. And a linguist named Emily M. Bender is very worried about what will happen when we forget this.

By Elizabeth Weil and Emily M. Bender for New York Magazine on March 1, 2023

Denmark’s welfare fraud system reflects a deeply racist and exclusionary society

As part of a series of investigative reporting by Lighthouse Reports and WIRED, Gabriel Geiger has revealed some of the findings about the use of welfare fraud algorithms in Denmark. This comes amid the increasing use of algorithmic systems to detect welfare fraud across European cities, or at least of those systems that are currently known.

Continue reading “Denmark’s welfare fraud system reflects a deeply racist and exclusionary society”

How a Rotterdam fraud algorithm learned to suspect vulnerable groups

An algorithm that the municipality of Rotterdam used for years to predict welfare fraud counted young mothers and people who speak Dutch poorly among the highest-risk groups. They stood the greatest chance of a strict check by the municipality. This emerges from research by Lighthouse Reports, Argos, Vers Beton and Follow the Money, in which journalists got their hands on a complete fraud algorithm for the first time.

By David Davidson and Tom Claessens for Follow the Money on March 6, 2023

Inside the Suspicion Machine

Obscure government algorithms are making life-changing decisions about millions of people around the world. Here, for the first time, we reveal how one of these systems works.

By Dhruv Mehrotra, Eva Constantaras, Gabriel Geiger, Htet Aung and Justin-Casimir Braun for WIRED on March 6, 2023

The Algorithm Addiction

Mass profiling system SyRI resurfaces in the Netherlands despite ban and landmark court ruling.

By Allart van der Woude, Daniel Howden, David Davidson, Evaline Schot, Gabriel Geiger, Judith Konijn, Ludo Hekman, Marc Hijink, May Bulman and Saskia Adriaens for Lighthouse Reports on December 20, 2022

Word embeddings quantify 100 years of gender and ethnic stereotypes

Word embeddings are a popular machine-learning method that represents each English word by a vector, such that the geometry between these vectors captures semantic relations between the corresponding words. We demonstrate that word embeddings can be used as a powerful tool to quantify historical trends and social change. As specific applications, we develop metrics based on word embeddings to characterize how gender stereotypes and attitudes toward ethnic minorities in the United States evolved during the 20th and 21st centuries starting from 1910. Our framework opens up a fruitful intersection between machine learning and quantitative social science.

By Dan Jurafsky, James Zou, Londa Schiebinger and Nikhil Garg for PNAS on April 3, 2018
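
The kind of bias metric behind results like these can be sketched in a few lines: average the vectors for two groups of gendered words, then compare how far each occupation word sits from the two group averages. A rough Python illustration using gensim, assuming locally available pre-trained vectors in word2vec format; the file name and word lists are placeholders, not the paper's data.

```python
import numpy as np
from gensim.models import KeyedVectors

# Illustrative relative-distance bias measure; path and word lists are placeholders.
vectors = KeyedVectors.load_word2vec_format("embeddings.bin", binary=True)

female_words = ["she", "her", "woman", "daughter"]
male_words = ["he", "his", "man", "son"]
occupations = ["nurse", "engineer", "librarian", "carpenter"]

def group_mean(words):
    # Average vector for the words in a group (skipping out-of-vocabulary words).
    return np.mean([vectors[w] for w in words if w in vectors], axis=0)

female_mean, male_mean = group_mean(female_words), group_mean(male_words)

# Negative values: the occupation vector sits closer to the female average;
# positive values: closer to the male average.
for occupation in occupations:
    bias = (np.linalg.norm(vectors[occupation] - female_mean)
            - np.linalg.norm(vectors[occupation] - male_mean))
    print(f"{occupation:>10s}: {bias:+.3f}")
```

Running the same measurement on embeddings trained on corpora from different decades is what lets the authors trace how stereotypes shift over time.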

Quantifying bias in society with ChatGPT-like tools

ChatGPT is an implementation of a so-called ‘large language model’. These models are trained on text from the internet at large. This means that these models inherit the bias that exists in our language and in our society. This has an interesting consequence: it suddenly becomes possible to see, in a quantitative and undeniable way, how bias changes over time.

Continue reading “Quantifying bias in society with ChatGPT-like tools”
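
One way to make this kind of bias visible (a rough illustration, not the method described in the post) is to probe a masked language model and compare the probabilities it assigns to gendered pronouns in an otherwise neutral sentence. A small sketch using the Hugging Face `transformers` library and the public `bert-base-uncased` checkpoint; the sentence template and occupation list are illustrative assumptions.

```python
from transformers import pipeline

# Probe a masked language model for gendered associations; model choice,
# template, and occupation list are illustrative, not prescribed by the post.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

for occupation in ["nurse", "engineer", "teacher", "ceo"]:
    predictions = fill_mask(f"The {occupation} said that [MASK] was late.", top_k=50)
    scores = {p["token_str"]: p["score"] for p in predictions}
    print(f"{occupation:>10s}  he: {scores.get('he', 0.0):.3f}  she: {scores.get('she', 0.0):.3f}")
```

Repeating such probes on models trained on text from different periods is one route to the quantitative, over-time comparison the post points at.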

Racist Technology in Action: Let’s make an avatar! Of sexy women and tough men of course

Just upload a selfie to the “AI avatar app” Lensa and it will generate a digital portrait of you. Think, for example, of a slightly fitter or more beautiful version of yourself as an astronaut or the lead singer in a band. If you are a man, that is. As it turns out, for women, and especially women with Asian heritage, Lensa churns out pornified, sexy and skimpily clothed avatars.

Continue reading “Racist Technology in Action: Let’s make an avatar! Of sexy women and tough men of course”

In poor neighbourhoods, the government still predicts fraud

Even after the ban on the ‘dragnet’ SyRI, the government still predicts fraud at addresses in socio-economically weaker neighbourhoods. Argos and Lighthouse Reports investigated the method, in which municipalities and agencies such as the Belastingdienst, UWV and the police share risk signals. ‘This is about a government that knows so much about you that it can always find something.’

By David Davidson and Saskia Adriaens for VPRO on December 20, 2022

The devastating consequences of risk based profiling by the Dutch police

Diana Sardjoe writes for Fair Trials about how her sons were profiled by the Amsterdam police on the basis of risk models (a form of predictive policing) called ‘Top600’ (for adults) and ‘Top400’ (for people aged 12 to 23). Because of this profiling, her sons were “continually monitored and harassed by police.”

Continue reading “The devastating consequences of risk based profiling by the Dutch police”
