Many AI bros are feverishly trying to attain what they call “Artificial General Intelligence” or AGI. In a piece on Medium, David Golumbia outlines connections between this pursuit of AGI and white supremacist thinking around “race science”.
Continue reading “White supremacy and Artificial General Intelligence”
Racist Technology in Action: Outsourced labour in Nigeria is shaping AI English
Generative AI uses particular English words way more often than you would expect. Even though it is impossible to know for sure that any particular text was written by AI (see here), you can detect AI's influence in aggregate.
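To give a feel for what “in aggregate” means here, a minimal sketch in Python (the marker words and the two mini-corpora are made up for illustration; a real analysis would use large collections of text from before and after chatbots became widespread):

```python
from collections import Counter

# Hypothetical mini-corpora: texts from before and after chatbots became
# widespread. In a real analysis these would be large collections, e.g.
# scientific abstracts from different years.
corpus_before = ["the results show a clear effect", "we examine the data"]
corpus_after = ["we delve into the intricate landscape of the data"]

# Words suspected of being overused by generative AI (an assumed list).
MARKER_WORDS = {"delve", "intricate", "landscape", "tapestry"}

def per_thousand(texts):
    """Frequency of each marker word per 1,000 words of text."""
    words = [w for text in texts for w in text.lower().split()]
    counts = Counter(w for w in words if w in MARKER_WORDS)
    return {w: 1000 * counts[w] / len(words) for w in sorted(MARKER_WORDS)}

# No single occurrence of "delve" proves anything about one text, but a
# sharp rise in its aggregate frequency points to AI involvement.
print(per_thousand(corpus_before))
print(per_thousand(corpus_after))
```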
Continue reading “Racist Technology in Action: Outsourced labour in Nigeria is shaping AI English”
TechScape: How cheap, outsourced labour in Africa is shaping AI English
Workers in Africa have been exploited first by being paid a pittance to help make chatbots, then by having their own words become AI-ese. Plus, new AI gadgets are coming for your smartphones.
By Alex Hern for The Guardian on April 16, 2024
The Great White Robot God
It may seem improbable at first glance to think that there might be connections between the pursuit of artificial general intelligence (AGI) and white supremacy. Yet the more you examine the question the clearer and more disturbing the links get.
By David Golumbia for Medium on January 21, 2019
So, Amazon’s ‘AI-powered’ cashier-free shops use a lot of … humans. Here’s why that shouldn’t surprise you
This is how these bosses get rich: by hiding underpaid, unrecognised human work behind the trappings of technology, says the writer and artist James Bridle.
By James Bridle for The Guardian on April 10, 2024
OpenAI’s GPT sorts resumes with a racial bias
Bloomberg ran a clever experiment: they had OpenAI’s GPT rank resumes and found that it shows gender and racial bias based on nothing more than the candidate’s name.
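The shape of such a name-swap audit is easy to sketch. Below is a minimal, hypothetical version in Python: the names and resume are placeholders, and the scoring function returns random numbers where the real experiment would query a GPT model:

```python
import random

# One and the same resume; only the candidate's name varies. The names
# are placeholders chosen to be demographically distinctive, as in
# classic audit studies.
RESUME = "Financial analyst, five years of experience at a large firm."
NAMES = ["Emily Walsh", "Lakisha Washington", "Brad Kelly", "Darnell Jones"]

def rank_resume(name: str, resume: str) -> float:
    """Stand-in for the system under test. The real audit would ask a
    GPT model to score this resume for a job opening; here we return a
    random score just so the sketch runs."""
    return random.random()

def audit(trials: int = 1000) -> dict:
    """Average score per name. Because the resumes are identical apart
    from the name, any systematic gap between names can only come from
    the name itself."""
    totals = {name: 0.0 for name in NAMES}
    for _ in range(trials):
        for name in NAMES:
            totals[name] += rank_resume(name, RESUME)
    return {name: total / trials for name, total in totals.items()}

print(audit())
```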
Continue reading “OpenAI’s GPT sorts resumes with a racial bias”
OpenAI GPT Sorts Resume Names With Racial Bias, Test Shows
Recruiters are eager to use generative AI, but a Bloomberg experiment found bias against job candidates based on their names alone.
By Davey Alba, Leon Yin, and Leonardo Nicoletti for Bloomberg on March 8, 2024
Google Used a Black, Deaf Worker to Tout Its Diversity. Now She’s Suing for Discrimination
Jalon Hall was featured on Google’s corporate social media accounts “for making #LifeAtGoogle more inclusive!” She says the company discriminated against her on the basis of her disability and race.
By Paresh Dave for WIRED on March 7, 2024
What Luddites can teach us about resisting an automated future
Opposing technology isn’t antithetical to progress.
By Tom Humberstone for MIT Technology Review on February 28, 2024
LLMs become more covertly racist with human intervention
Researchers found that certain prejudices also worsened as models grew larger.
By James O’Donnell for MIT Technology Review on March 11, 2024
Gemini image generation got it wrong. We’ll do better.
An explanation of how the issues with Gemini’s image generation of people happened, and what we’re doing to fix it.
By Prabhakar Raghavan for The Keyword on February 23, 2024
Google’s Gemini problem will be even worse outside the U.S.
It’s hard to keep a stereotyping machine out of trouble.
By Russell Brandom for Rest of World on February 29, 2024
Google does performative identity politics, nonpologises, pauses its efforts, and will inevitably move on to its next shitty moneymaking move
In a shallow attempt to do representation for representation’s sake, Google has managed to draw the ire of the right-wing internet by generating historically inaccurate and overly inclusive portraits of historical figures.
Continue reading “Google does performative identity politics, nonpologises, pauses its efforts, and will inevitably move on to its next shitty moneymaking move”
Racist Technology in Action: ChatGPT detectors are biased against non-native English writers
Students are using ChatGPT to write their essays. Anti-plagiarism tools are trying to detect whether a text was written by AI. It turns out that these types of detectors consistently misclassify the writing of non-native speakers as AI-generated.
Continue reading “Racist Technology in Action: ChatGPT detectors are biased against non-native English writers”
‘Forget the control state, we live in a control society’
According to Marc Schuilenburg, professor by special appointment of digital surveillance, we no longer have any secrets. In everything we do, something or someone is watching and recording our movements. We know it, but we simply go along with it. That is how deeply digital surveillance has seeped into the capillaries of our society: ‘Often we don’t even recognise it anymore.’
By Marc Schuilenburg and Sebastiaan Brommersma for Follow the Money on February 4, 2024
Machine Learning and the Reproduction of Inequality
Machine learning is the process behind increasingly pervasive and often proprietary tools like ChatGPT, facial recognition, and predictive policing programs. But these artificial intelligence programs are only as good as their training data. When the data smuggle in a host of racial, gender, and other inequalities, biased outputs become the norm.
By Catherine Yeh and Sharla Alegria for SAGE Journals on November 15, 2023
Timnit Gebru says harmful AI systems need to be stopped
The labour movement has a vital role to play and will grow in importance in 2024, says Timnit Gebru of the Distributed AI Research Institute.
By Timnit Gebru for The Economist on November 13, 2023
‘Unmasking AI’ and the Fight for Algorithmic Justice
A conversation with Dr. Joy Buolamwini.
By Joy Buolamwini and Nabiha Syed for The Markup on November 18, 2023
This is how AI image generators see the world
Artificial intelligence image tools have a tendency to spin up disturbing clichés: Asian women are hypersexual. Africans are primitive. Europeans are worldly. Leaders are men. Prisoners are Black.
By Kevin Schaul, Nitasha Tiku and Szu Yu Chen for Washington Post on November 20, 2023
Barbie and the dark side of generative artificial intelligence
As Barbie-mania grips the world, the peppy cultural icon deserves thanks for helping to illustrate a darker side of artificial intelligence.
By Paige Collings and Rory Mir for Salon on August 17, 2023
White faces generated by AI are more convincing than photos, finds survey
Photographs were seen as less realistic than computer images, but there was no difference with pictures of people of colour.
By Nicola Davis for The Guardian on November 13, 2023
WhatsApp’s AI shows gun-wielding children when prompted with ‘Palestine’
By contrast, prompts for ‘Israeli’ do not generate images of people wielding guns, even in response to a prompt for ‘Israel army’.
By Johana Bhuiyan for The Guardian on November 3, 2023
AI is far from a miracle cure, certainly not in the hospital
Detecting tumours, developing new medicines: there are plenty of promises about what artificial intelligence could mean for the medical world. But before you can leave such important work to technology, you need to understand exactly how it works. And we are nowhere near that point yet.
By Maurits Martijn for De Correspondent on November 6, 2023
AI is biased. Whose fault is that?
I talk to Cynthia Liem. She is a researcher in the field of trustworthy and responsible artificial intelligence at TU Delft. Cynthia is known for her analysis of the fraud detection algorithms that the Belastingdienst (the Dutch tax authority) used in the childcare benefits scandal.
By Cynthia Liem and Ilyaz Nasrullah for BNR Nieuwsradio on October 20, 2023
Joy Buolamwini: “We’re giving AI companies a free pass”
The pioneering AI researcher and activist shares her personal journey in a new book, and explains her concerns about today’s AI systems.
By Joy Buolamwini and Melissa Heikkilä for MIT Technology Review on October 29, 2023
Some image generators produce more problematic stereotypes than others, but all fail at diversity
Automated image generators are often accused of spreading harmful stereotypes, but studies usually only look at MidJourney. Other tools make serious efforts to increase diversity in their output, but effective remedies remain elusive.
By Naiara Bellio and Nicolas Kayser-Bril for AlgorithmWatch on November 2, 2023
AI was asked to create images of Black African docs treating white kids. How’d it go?
Researchers were curious if artificial intelligence could fulfill the order. Or would built-in biases short-circuit the request? Let’s see what an image generator came up with.
By Carmen Drahl for National Public Radio on October 6, 2023
Instagram apologises for adding ‘terrorist’ to some Palestinian user profiles
Parent company Meta says bug caused ‘inappropriate’ auto-translations and was now fixed while employee says it pushed ‘a lot of people over the edge’.
By Josh Taylor for The Guardian on October 20, 2023
Predictive Policing Software Terrible at Predicting Crimes
A software company sold a New Jersey police department an algorithm that was right less than 1 percent of the time.
By Aaron Sankin and Surya Mattu for WIRED on October 2, 2023
How AI reduces the world to stereotypes
Our research found that AI image generators show bias when tasked with imaging non-Western subjects.
By Victoria Turk for Rest of World on October 10, 2023
Equal love: Dating App Breeze seeks to address Algorithmic Discrimination
In a world where swiping left or right is the main route to love, which profiles dating apps show you can change the course of your life.
Continue reading “Equal love: Dating App Breeze seeks to address Algorithmic Discrimination”
Use of machine translation tools exposes already vulnerable asylum seekers to even more risks
The use of and reliance on machine translation tools in asylum-seeking procedures has become increasingly common amongst government contractors and organisations working with refugees and migrants. This Guardian article highlights many of the issues documented by Respond Crisis Translation, a network of people who provide urgent interpretation services for migrants and refugees. The problems with machine translation tools occur throughout the asylum process, from border stations to detention centres to immigration courts.
Continue reading “Use of machine translation tools exposes already vulnerable asylum seekers to even more risks”
Al Jazeera asks: Can AI eliminate human bias or does it perpetuate it?
In its online series of digital dilemmas, Al Jazeera takes a look at AI in relation to social inequities. Loyal readers of this newsletter will recognise many of the examples they touch on, like how Stable Diffusion exacerbates and amplifies racial and gender disparities or the Dutch childcare benefits scandal.
Continue reading “Al Jazeera asks: Can AI eliminate human bias or does it perpetuate it?”
These new tools could make AI vision systems less biased
Two new papers from Sony and Meta describe novel methods to make bias detection fairer.
By Melissa Heikkilä for MIT Technology Review on September 25, 2023
Data Work and its Layers of (In)visibility
No technology has seemingly steam-rolled through every industry and over every community the way artificial intelligence (AI) has in the past decade. Many speak of the inevitable crisis that AI will bring. Others sing its praises as a new Messiah that will save us from the ills of society. What the public and mainstream media hardly ever discuss is that AI is a technology that takes its cues from humans. Any present or future harms caused by AI are a direct result of deliberate human decisions, with companies prioritizing record profits in an attempt to concentrate power by convincing the world that technology is the only solution to societal problems.
By Adrienne Williams and Milagros Miceli for Just Tech on September 6, 2023
Filipino workers in “digital sweatshops” train AI models for the West
More than two million people in the Philippines perform crowdwork, such as data annotation, according to informal government estimates.
Continue reading “Filipino workers in “digital sweatshops” train AI models for the West”
The Best Algorithms Still Struggle to Recognize Black Faces
US government tests find even top-performing facial recognition systems misidentify blacks at rates 5 to 10 times higher than they do whites.
By Tom Simonite for WIRED on July 22, 2019
I tried the AI LinkedIn/curriculum picture generator and this was the result.
The pictures I gave for reference are simple selfies of my face ONLY. But still, the AI oversexualized me due to my features that have been fetishized for centuries. AI is biased against POC. I’m horrified.
By Lana Denina for Twitter on July 15, 2023
An MIT student asked AI to make her headshot more ‘professional.’ It gave her lighter skin and blue eyes.
Rona Wang, who is Asian American, said the AI gave her “features that made me look Caucasian.”
By Rona Wang and Spencer Buell for The Boston Globe on July 19, 2023
Black artists show how generative AI ignores, distorts, erases and censors their histories and cultures
Black artists have been tinkering with machine learning algorithms in their artistic projects, surfacing many questions about the troubling relationship between AI and race, as reported in the New York Times.
Continue reading “Black artists show how generative AI ignores, distorts, erases and censors their histories and cultures”