In the Philippines, more than two million people perform crowdwork, such as data annotation, according to informal government estimates.
Continue reading “Filipino workers in “digital sweatshops” train AI models for the West”
The Best Algorithms Still Struggle to Recognize Black Faces
US government tests find even top-performing facial recognition systems misidentify blacks at rates 5 to 10 times higher than they do whites.
By Tom Simonite for WIRED on July 22, 2019
I tried the AI LinkedIn/curriculum picture generator and this was the result.
The pictures I gave for reference are simple selfies of my face ONLY. But still, the AI oversexualized me due to my features that have been fetishized for centuries. AI is biased for POC. I’m horrified.
By Lana Denina for Twitter on July 15, 2023
An MIT student asked AI to make her headshot more ‘professional.’ It gave her lighter skin and blue eyes.
Rona Wang, who is Asian American, said the AI gave her “features that made me look Caucasian.”
By Rona Wang and Spencer Buell for The Boston Globe on July 19, 2023
Black artists show how generative AI ignores, distorts, erases and censors their histories and cultures
Black artists have been tinkering with machine learning algorithms in their artistic projects, surfacing many questions about the troubling relationship between AI and race, as reported in the New York Times.
Continue reading “Black artists show how generative AI ignores, distorts, erases and censors their histories and cultures”
It is mainly women of colour who are calling out AI’s biases
What you put into self-learning AI systems is what you get back. Technology, largely developed by white men, thereby amplifies and conceals those biases. It is mainly women (of colour) who are sounding the alarm.
By Marieke Rotman, Nani Jansen Reventlow, Oumaima Hajri and Tanya O’Carroll for De Groene Amsterdammer on July 12, 2023
Civil society calls on EU to protect people’s rights in the AI Act ‘trilogue’ negotiations
As EU institutions start decisive meetings on the Artificial Intelligence (AI) Act, a broad civil society coalition is urging them to prioritise people and fundamental rights.
From European Digital Rights (EDRi) on July 12, 2023
Racist Technology in Action: Stable Diffusion exacerbates and amplifies racial and gender disparities
Bloomberg’s researchers used Stable Diffusion to gauge the magnitude of biases in generative AI. Through an analysis of more than 5,000 images created by Stable Diffusion, they found that it takes racial and gender disparities to extremes, producing results even more skewed than those found in the real world.
Continue reading “Racist Technology in Action: Stable Diffusion exacerbates and amplifies racial and gender disparities”
Black Artists Say A.I. Shows Bias, With Algorithms Erasing Their History
Tech companies acknowledge machine-learning algorithms can perpetuate discrimination and need improvement.
By Zachary Small for The New York Times on July 4, 2023
AI translation is jeopardizing Afghan asylum claims
Afghan refugees’ asylum claims are being rejected because of bad AI translations of Pashto and Dari.
By Andrew Deck for Rest of World on April 19, 2023
Generative AI Takes Stereotypes and Bias From Bad to Worse
Text-to-image models amplify stereotypes about race and gender — here’s why that matters.
By Dina Bass and Leonardo Nicoletti for Bloomberg on June 1, 2023
Representing skin tone, or Google’s hubris versus the simplicity of Crayola
Google wants to “help computers ‘see’ our world”, and one of the ways it is battling the biases that current AI and machine learning systems perpetuate is by introducing a more inclusive skin tone scale, the ‘Monk Skin Tone Scale’.
Continue reading “Representing skin tone, or Google’s hubris versus the simplicity of Crayola”
Attempts to eliminate bias through diversifying datasets? A distraction from the root of the problem
In this eloquent and haunting piece, Hito Steyerl weaves the eugenicist history of statistics together with the ongoing story of its integration into machine learning. She explains why attempts to eliminate bias in facial recognition technology through diversifying datasets obscure the root of the problem: machine learning and automation are fundamentally reliant on extracting and exploiting human labour.
Continue reading “Attempts to eliminate bias through diversifying datasets? A distraction from the root of the problem”
Racist Technology in Action: Image recognition is still not capable of differentiating gorillas from Black people
If this title feels like déjà vu, it is because you most likely have, in fact, seen this before (perhaps even in our newsletter). It was back in 2015 that the controversy first arose, when Google released image recognition software that kept mislabelling Black people as gorillas (read here and here).
Continue reading “Racist Technology in Action: Image recognition is still not capable of differentiating gorillas from Black people”
Tech Elite’s AI Ideologies Have Racist Foundations, Say AI Ethicists
More and more prominent tech figures are voicing concerns about superintelligent AI and risks to the future of humanity. But as leading AI ethicist Timnit Gebru and researcher Émile P Torres point out, these ideologies have deeply racist foundations.
By Samara Linton for POCIT on May 24, 2023
On Race, AI, and Representation Or, Why Democracy Now Needs To Redo Its June 1 Segment
On June 1, Democracy Now featured a roundtable discussion hosted by Amy Goodman and Nermeen Shaikh, with three experts on Artificial Intelligence (AI), about their views on AI in the world. They included Yoshua Bengio, a computer scientist at the Université de Montréal, long considered a “godfather of AI,” Tawana Petty, an organiser and Director of Policy at the Algorithmic Justice League (AJL), and Max Tegmark, a physicist at the Massachusetts Institute of Technology. Recently, the Future of Life Institute, of which Tegmark is president, issued an open letter calling “on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” Bengio is a signatory on the letter (as is Elon Musk). The AJL has been around since 2016, and has (along with other organisations) been calling for a public interrogation of racialised surveillance technology, the use of police robots, and other ways in which AI can be directly responsible for bodily harm and even death.
By Yasmin Nair for Yasmin Nair on June 3, 2023
GPT detectors are biased against non-native English writers
The rapid adoption of generative language models has brought about substantial advancements in digital communication, while simultaneously raising concerns regarding the potential misuse of AI-generated content. Although numerous detection methods have been proposed to differentiate between AI and human-generated content, the fairness and robustness of these detectors remain underexplored. In this study, we evaluate the performance of several widely-used GPT detectors using writing samples from native and non-native English writers. Our findings reveal that these detectors consistently misclassify non-native English writing samples as AI-generated, whereas native writing samples are accurately identified. Furthermore, we demonstrate that simple prompting strategies can not only mitigate this bias but also effectively bypass GPT detectors, suggesting that GPT detectors may unintentionally penalize writers with constrained linguistic expressions. Our results call for a broader conversation about the ethical implications of deploying ChatGPT content detectors and caution against their use in evaluative or educational settings, particularly when they may inadvertently penalize or exclude non-native English speakers from the global discourse.
By Eric Wu, James Zou, Mert Yuksekgonul, Weixin Liang and Yining Mao for arXiv.org on April 18, 2023
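The measurement at the heart of this study is simple enough to sketch in a few lines. The following is a minimal illustration, not the authors’ code: the toy detector, its type-token-ratio heuristic, the 0.5 threshold and the sample texts are all assumptions, standing in for any real detector that returns a probability that a text is AI-generated.

```python
# Sketch of the study's core measurement: score human-written essays from
# two groups with a detector and compare how often each group is wrongly
# flagged as AI-generated. The toy detector uses vocabulary variety
# (type-token ratio) as a crude stand-in for the low-perplexity signal the
# paper argues real detectors rely on; it is not any real product.

def toy_detector(text: str) -> float:
    """Pseudo-probability that `text` is AI-generated: less varied
    vocabulary yields a higher score, mimicking how real detectors
    penalize 'constrained linguistic expressions'."""
    words = text.lower().split()
    if not words:
        return 0.0
    return 1.0 - len(set(words)) / len(words)

def false_positive_rate(human_texts: list[str], threshold: float = 0.5) -> float:
    """Every input is human-written, so each flag is a false positive."""
    flagged = sum(toy_detector(t) >= threshold for t in human_texts)
    return flagged / len(human_texts)

# Hypothetical usage with two groups of human-written samples; in the
# study, these would be real essays by native and non-native writers.
native = ["The committee convened at dawn, weighing each argument in turn."]
non_native = ["The meeting start in morning and we talk about the problem."]
for name, texts in [("native", native), ("non-native", non_native)]:
    scores = [round(toy_detector(t), 2) for t in texts]
    print(name, scores, f"flagged: {false_positive_rate(texts):.0%}")
```

A per-group false-positive comparison of this shape is what surfaces the disparity the abstract describes; swapping the toy scorer for a commercial detector would be the only change needed to rerun it in earnest.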
Google’s Photo App Still Can’t Find Gorillas. And Neither Can Apple’s.
Eight years after a controversy over Black people being mislabeled as gorillas by image analysis software — and despite big advances in computer vision — tech giants still fear repeating the mistake.
By Kashmir Hill and Nico Grant for The New York Times on May 22, 2023
‘There was all sorts of toxic behaviour’: Timnit Gebru on her sacking by Google, AI’s dangers and big tech’s biases
The Ethiopian-born computer scientist lost her job after pointing out the inequalities built into AI. But after decades working with technology companies, she knows all too much about discrimination.
By John Harris for The Guardian on May 22, 2023
Skin Tone Research @ Google
Introducing the Monk Skin Tone (MST) Scale, one of the ways we are moving AI forward with more inclusive computer vision tools.
From Skin Tone at Google
Consensus and subjectivity of skin tone annotation for ML fairness
Skin tone is an observable characteristic that is subjective, perceived differently by individuals (e.g., depending on their location or culture) and thus is complicated to annotate. That said, the ability to reliably and accurately annotate skin tone is highly important in computer vision. This became apparent in 2018, when the Gender Shades study highlighted that computer vision systems struggled to detect people with darker skin tones, and performed particularly poorly for women with darker skin tones. The study highlights the importance for computer researchers and practitioners to evaluate their technologies across the full range of skin tones and at intersections of identities. Beyond evaluating model performance on skin tone, skin tone annotations enable researchers to measure diversity and representation in image retrieval systems, dataset collection, and image generation. For all of these applications, a collection of meaningful and inclusive skin tone annotations is key.
By Candice Schumann and Gbolahan O. Olanubi for Google AI Blog on May 15, 2023
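As a concrete illustration of what “consensus” can mean here, below is a minimal sketch, not Google’s actual pipeline: it assumes several annotators each assign an image a Monk Skin Tone value (the MST scale runs from 1, lightest, to 10, darkest) and aggregates their labels, treating disagreement between raters as the subjectivity signal the post describes.

```python
# Minimal sketch: aggregating several annotators' Monk Skin Tone (MST)
# labels per image. The MST scale is ordinal (1 = lightest, 10 = darkest),
# so the median is a reasonable consensus label, and the spread of labels
# is a simple signal of how subjective a given image is to annotate.

from statistics import median, pstdev

# Hypothetical annotations: image id -> MST labels from different raters.
annotations = {
    "img_001": [3, 3, 4],
    "img_002": [7, 9, 8, 8],
    "img_003": [1, 2, 1],
}

for image_id, labels in annotations.items():
    consensus = median(labels)      # ordinal consensus label
    disagreement = pstdev(labels)   # higher -> more subjective case
    print(f"{image_id}: consensus MST {consensus}, spread {disagreement:.2f}")
```

Because the scale is ordinal, the median is a safer consensus than the mean, and the high-spread images are exactly the ones where the post’s point about location- and culture-dependent perception bites.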
Mean Images
An artist considers a new form of machinic representation: the statistical rendering of large datasets, indexed to the probable rather than the real of photography; to the uncanny composite rather than the abstraction of the graph.
By Hito Steyerl for New Left Review on April 28, 2023
Statement from the listed authors of Stochastic Parrots on the “AI pause” letter
The harms from so-called AI are real and present and follow from the acts of people and corporations deploying automated systems. Regulatory efforts should focus on transparency, accountability and preventing exploitative labor practices.
By Angelina McMillan-Major, Emily M. Bender, Margaret Mitchell and Timnit Gebru for DAIR on March 31, 2023
AI hype: Unbearably white and male
In this (rightly) scathing article, Andrea Grimes denounces the “AI” hype spread by tech bros and perpetuated by much of the mainstream media coverage of large language models, ChatGPT, and GPT-4.
Continue reading “AI hype: Unbearably white and male”
Researcher Meredith Whittaker says AI’s biggest risk isn’t ‘consciousness’—it’s the corporations that control them
The former Googler and current Signal president on why she thinks Geoffrey Hinton’s alarmism is a distraction from more pressing threats.
By Meredith Whittaker and Wilfred Chan for Fast Company on May 5, 2023
Is AI the gunpowder of the 21st century? ‘There are certainly parallels’
Gunpowder was a tremendously clever invention, with both good and bad applications. Will we one day look at artificial intelligence in the same way?
By Claes de Vreese, Hind Dekker-Abdulaziz, Ilyaz Nasrullah, Martijn Bertisen, Nienke Schipper and Oumaima Hajri for Trouw on May 2, 2023
These robots were trained on AI. They became racist and sexist.
As billions flow into robotics, researchers who conducted the study are concerned about the effects this might have on society.
By Pranshu Verma for Washington Post on July 16, 2022
The Unbearable White Maleness of AI
Tech pundits presume artificial intelligence is something you either conquer or succumb to. But they’re looking at it all wrong.
By Andrea Grimes for Dame Magazine on April 11, 2023
Governments’ use of automated decision-making systems reflects systemic issues of injustice and inequality
In 2019, former UN Special Rapporteur Philip Alston said he was worried we were “stumbling zombie-like into a digital welfare dystopia.” He had been researching how government agencies around the world were turning to automated decision-making systems (ADS) to cut costs, increase efficiency and target resources. ADS are technical systems designed to help or replace human decision-making using algorithms.
By Joanna Redden for Parental social licence for data linkage for service intervention on October 5, 2022
Meta’s clampdown on Palestine speech is far from ‘unintentional’
A report validated Palestinian experiences of social media censorship in May 2021, but missed how those policies are biased by design.
By Marwa Fatafta for +972 Magazine on October 9, 2022
How AI amplifies and spreads stigmas about Muslim women
There is no getting around AI these days. Whether it is ChatGPT or the Lensa AI app, anyone who moves through the digital world will encounter it sooner or later. Weighing up whether ‘AI is good or bad’ is difficult, especially because it is not yet used that widely. But if the experts are to be believed, that will change in the future. High time for award-winning photographer Cigdem Yuksel to investigate what the use of AI means for the portrayal of Muslim women. Lilith Magazine spoke with Yuksel and with Laurens Vreekamp, author of The Art of AI.
By Aimée Dabekaussen, Cigdem Yuksel and Laurens Vreekamp for Lilith on April 6, 2023
Metaphors of AI: “Gunpowder of the 21st Century”
With the rapid development of AI systems, more and more people are trying to grapple with the potential impact of these systems on our societies and daily lives. One often-used way to make sense of AI is through metaphors, which either help to clarify or horribly muddy the waters.
Continue reading “Metaphors of AI: “Gunpowder of the 21st Century””
What problems are AI-systems even solving? “Apparently, too few people ask that question”
In this interview, Felienne Hermans, Professor of Computer Science at the Vrije Universiteit Amsterdam, discusses the sore lack of diversity in the white, male-dominated world of programming, the importance of teaching people how to code, and the problematic uses of AI systems.
Continue reading “What problems are AI-systems even solving? “Apparently, too few people ask that question””
How AIs collapse our history and culture into a monolithic perspective
In this piece on Medium, Jenka Gurfinkel writes about a Reddit user who asked Midjourney, a generative AI, to do the following:
“Imagine a time traveler journeyed to various times and places throughout human history and showed soldiers and warriors of the periods what a ‘selfie’ is.”
Continue reading “How AIs collapse our history and culture into a monolithic perspective”
More data will not solve bias in algorithmic systems: it’s a systemic issue, not a ‘glitch’
In an interview with Zoë Corbyn in the Guardian, data journalist and Associate Professor of Journalism Meredith Broussard discusses her new book More Than a Glitch: Confronting Race, Gender and Ability Bias in Tech.
Continue reading “More data will not solve bias in algorithmic systems: it’s a systemic issue, not a ‘glitch’”
Computer-generated inclusivity: fashion turns to ‘diverse’ AI models
Fashion brands including Levi’s and Calvin Klein are having custom AI models created to ‘supplement’ representation in size, skin tone and age.
By Alaina Demopoulos for The Guardian on April 3, 2023
AI and the American Smile
How AI misrepresents culture through a facial expression.
By Jenka Gurfinkel for Medium on March 26, 2023
AI expert Meredith Broussard: ‘Racism, sexism and ableism are systemic problems’
The journalist and academic says the bias encoded in artificial intelligence systems can’t be fixed with better data alone – the change has to be societal.
By Meredith Broussard and Zoë Corbyn for The Guardian on March 26, 2023
Professor of computer science fears the rise of AI: ‘Would you like to pay 50 euros extra for a human? Press 1’
Programming is still a man’s world. Professor of computer science Felienne Hermans wants to change that. Meanwhile, she lies awake at night over the wide array of misery that new AI applications such as ChatGPT bring about.
By Felienne Hermans and Laurens Verhagen for Volkskrant on March 16, 2023
Our future on Big Tech’s terms
This episode is devoted to the investigation The Public Interest vs. Big Tech, which looks at the trouble civil society organisations run into because of large tech companies’ power over their communication. Inge talks with Evelyn, Lotje and Ramla about this investigation, which we carried out together with four civil society movements and with Pilp (the Public Interest Litigation Project). We also call in Oumaima Hajri, who, in collaboration with the Racism and Technology Center, has started an alliance against the militarisation of AI.
By Evelyn Austin, Inge Wannet, Lotje Beek and Oumaima Hajri for Bits of Freedom on March 17, 2023