ChatGPT is an implementation of a so-called ‘large language model’. These models are trained on text from the internet at large, which means they inherit the bias that exists in our language and in our society. This has an interesting consequence: it suddenly becomes possible to see, in a quantitative and undeniable way, how bias changes over time.
Continue reading “Quantifying bias in society with ChatGPT-like tools”
Alliance Against Military AI
Civil society organisations urge the Dutch government to immediately establish a moratorium on developing AI systems in the military domain.
By Oumaima Hajri for Alliantie tegen militaire AI on February 15, 2023
The Costs of Connection – How Data is Colonizing Human Life and Appropriating it for Capitalism
A profound exploration of how the ceaseless extraction of information about our intimate lives is remaking both global markets and our very selves. The Costs of Connection represents an enormous step forward in our collective understanding of capitalism’s current stage, a stage in which the final colonial input is the raw data of human life. Challenging, urgent and bracingly original.
By Nick Couldry and Ulises A. Mejias for Colonized by Data
Why isn’t ChatGPT a racist creep? Thanks to a group of Kenyans, for two dollars an hour
You won’t easily catch the popular chatbot ChatGPT using dirty words or racist language. It was neatly trained by dozens of Kenyans. Their task: teaching the algorithm not to bring up murder, torture and rape, so that we – the users – aren’t served up filthy muck.
By Maurits Martijn for De Correspondent on January 28, 2023
Exclusive: The $2 Per Hour Workers Who Made ChatGPT Safer
OpenAI used outsourced workers in Kenya earning less than $2 per hour to scrub toxicity from ChatGPT.
By Billy Perrigo for Time on January 18, 2023
Racist Technology in Action: Let’s make an avatar! Of sexy women and tough men of course
Just upload a selfie in the “AI avatar app” Lensa and it will generate a digital portrait of you. Think, for example, of a slightly more fit or beautiful version of yourself as an astronaut or the lead singer in a band. If you are a man that is. As it turns out, for women, and especially women with Asian heritage, Lensa churns out pornified, sexy and skimpily clothed avatars.
Continue reading “Racist Technology in Action: Let’s make an avatar! Of sexy women and tough men of course”
AI Art Is Soft Propaganda for the Global North
Unsurprisingly, the artistic and ethical shortcomings of AI image generators are tied to their dependence on capital and capitalism.
By Marco Donnarumma for Hyperallergic on October 24, 2022
The viral AI avatar app Lensa undressed me—without my consent
My avatars were cartoonishly pornified, while my male colleagues got to be astronauts, explorers, and inventors.
By Melissa Heikkilä for MIT Technology Review on December 12, 2022
ChatGPT, Galactica, and the Progress Trap
When large language models fall short, the consequences can be serious. Why is it so hard to acknowledge that?
By Abeba Birhane and Deborah Raji for WIRED on December 9, 2022
Whiteness of AI erases people of colour from our ‘imagined futures’, researchers argue
This is according to experts at the University of Cambridge, who suggest that current portrayals of and stereotypes about AI risk creating a “racially homogenous” workforce.
By Kanta Dihal and Stephen Cave for University of Cambridge on August 6, 2020
Meet The Former Black Twitter Workers Behind New Social Platform Spill
Social media app Spill, designed by former Twitter employees Alphonzo “Phonz” Terrell and DeVaris Brown, is becoming the chosen alternative for many.
By Kumba Kpakima for POCIT on December 21, 2022
Yes, ChatGPT is amazing and impressive. No, OpenAI has not come close to addressing the problem of bias
Filters appear to be bypassed with simple tricks, and superficially masked. And what is lurking inside is egregious.
By Steven T. Piantadosi for Twitter on December 4, 2022
Better Images of AI
We are a non-profit creating more realistic and inclusive images of artificial intelligence. Visit our growing repository available for anyone to use for free under CC licences, or just to use as inspiration for more helpful and diverse representations of AI.
From Better Images of AI
Researchers Find Stable Diffusion Amplifies Stereotypes
Current methods to mitigate these effects fail to prevent images perpetuating racist, misogynist and otherwise problematic stereotypes.
By Justin Hendrix for Tech Policy Press on November 9, 2022
Racist Technology in Action: AI-generated image tools amplify harmful stereotypes
Deep learning models that allow you to make images from simple textual ‘prompts’ have recently become available for the general public. Having been trained on a world full of visual representations of social stereotypes, it comes as no surprise that these tools perpetuate a lot of biased and harmful imagery.
Continue reading “Racist Technology in Action: AI-generated image tools amplify harmful stereotypes”
New Meta AI demo writes racist and inaccurate scientific literature, gets pulled
Galactica language model generated convincing text about fact and nonsense alike.
By Benj Edwards for Ars Technica on November 18, 2022
Shocked SHOCKED that it only took a handful of questions before Meta’s new Galactica text generation model regurgitated racist garbage.
I asked it to write about linguistic prejudice.
By Rikker Dockum for Twitter on November 16, 2022
Why Meta’s latest large language model survived only three days online
Galactica was supposed to help scientists. Instead, it mindlessly spat out biased and incorrect nonsense.
By Will Douglas Heaven for MIT Technology Review on November 18, 2022
AI innovation for whom, and at whose expense?
This fantastic article by Williams, Miceli and Gebru describes how the methodological shift of AI systems to deep-learning-based models has required enormous amounts of “data” for models to learn from. Large volumes of time-consuming work, such as labelling millions of images, can now be broken down into smaller tasks and outsourced to data labourers across the globe. These data labourers are paid terribly low wages and often work in dire conditions.
Continue reading “AI innovation for whom, and at whose expense?”
Beware of ‘Effective Altruism’ and ‘Longtermism’
‘Effective Altruism’ is all the vogue, but deeply problematic.
Continue reading “Beware of ‘Effective Altruism’ and ‘Longtermism’”
AI rapper FN Meka dropped by Capitol over racial stereotyping
Capitol Music Group faced a backlash for signing the artificial intelligence musician.
From BBC on August 24, 2022
A Primer on AI in/from the Majority World
A resource by Sareeta Amrute, Ranjit Singh, and Rigoberto Lara Guzmán exploring the presence of artificial intelligence and technology in the Majority World. 160 thematic works, available in English and Spanish.
By Ranjit Singh, Rigoberto Lara Guzmán and Sareeta Amrute for Data & Society on September 14, 2022
The Exploited Labor Behind Artificial Intelligence
Supporting transnational worker organizing should be at the center of the fight for “ethical AI.”
By Adrienne Williams, Milagros Miceli and Timnit Gebru for Noema on October 13, 2022
Listen to Sennay Ghebreab for clarity about what AI should and shouldn’t do
Sennay Ghebreab, head of the Civic AI Lab which aims to develop AI in a socially inclusive manner, was interviewed by Kustaw Bessems for the Volkskrant podcast Stuurloos (in Dutch).
Continue reading “Listen to Sennay Ghebreab for clarity about what AI should and shouldn’t do”
Defective Altruism
Socialism is the most effective altruism. Who needs anything else? The repugnant philosophy of “Effective Altruism” offers nothing to movements for global justice.
By Nathan J. Robinson for Current Affairs on September 19, 2022
You can also do something good with artificial intelligence.
It is easy to think of artificial intelligence only as something to be wary of: a powerful weapon in the hands of the government, or of tech companies guilty of privacy violations, discrimination or unjust punishments. But we can also use algorithms to solve problems and work towards a more just world, computer scientist Sennay Ghebreab of the Civic AI Lab tells Kustaw Bessems. For that, we do need to understand the basics a little and have more of a say in it.
By Kustaw Bessems and Sennay Ghebreab for Volkskrant on September 11, 2022
AI-trained robots bring algorithmic biases into robotics
A recent study in robotics has drawn attention from news media such as The Washington Post and VICE. In this study, researchers programmed virtual robots with popular artificial intelligence algorithms. These robots were then asked to scan blocks containing pictures of people’s faces and, following an open-ended instruction, decide which blocks to put into a virtual “box”. The researchers quickly found that these robots repeatedly picked women and people of colour to be put in the “box” when asked to respond to words such as “criminal”, “homemaker”, and “janitor”. The behaviour of these robots shows that the sexist and racist biases coded into AI algorithms have leaked into the field of robotics.
Continue reading “AI-trained robots bring algorithmic biases into robotics”
Racist Technology in Action: Turning a Black person, White
An example of racial bias in machine learning strikes again, this time by a program called PULSE, as reported by The Verge. Input a low resolution image of Barack Obama – or another person of colour such as Alexandria Ocasio-Cortez or Lucy Liu – and the resulting AI-generated high resolution image is distinctly of a white person.
Continue reading “Racist Technology in Action: Turning a Black person, White”
Meta Agrees to Alter Ad Technology in Settlement With U.S.
The Justice Department had accused Meta’s housing advertising system of discriminating against Facebook users based on their race, gender, religion and other characteristics.
By Mike Isaac for The New York Times on June 21, 2022
DALL·E mini has a mysterious obsession with women in saris
The images represent a glitch in the system that even its creator can’t explain.
By Nilesh Christopher for Rest of World on June 22, 2022
Forget sentience… the worry is that AI copies human bias
The fuss about a bot’s ‘consciousness’ obscures far more troubling concerns.
By Kenan Malik for The Guardian on June 19, 2022
Racist Technology in Action: Beauty is in the eye of the AI
Where people’s notion of beauty is often steeped in cultural preferences or plain prejudice, the objectivity of an AI system would surely allow it to access a more universal conception of beauty – or so thought the developers of Beauty.AI. Alex Zhavoronkov, who consulted on the development of the Beauty.AI system, described the dystopian motivation behind it clearly: “Humans are generally biased and there needs to be a robot to provide an impartial opinion. Beauty.AI is the first step in a much larger story, in which a mobile app trained to evaluate perception of human appearance will evolve into a caring personal assistant to help users look their best and retain their youthful looks.”
Continue reading “Racist Technology in Action: Beauty is in the eye of the AI”
AI recognition of patient race in medical imaging: a modelling study
Previous studies in medical imaging have shown disparate abilities of artificial intelligence (AI) to detect a person’s race, yet there is no known correlation for race on medical imaging that would be obvious to human experts when interpreting the images. We aimed to conduct a comprehensive evaluation of the ability of AI to recognise a patient’s racial identity from medical images.
By Ananth Reddy Bhimireddy, Ayis T Pyrros, Brandon J. Price, Chima Okechukwu, Haoran Zhang, Hari Trivedi, Imon Banerjee, John L Burns, Judy Wawira Gichoya, Laleh Seyyed-Kalantari, Lauren Oakden-Rayner, Leo Anthony Celi, Li-Ching Chen, Lyle J. Palmer, Marzyeh Ghassemi, Matthew P Lungren, Natalie Dullerud, Ramon Correa, Ryan Wang, Saptarshi Purkayastha, Shih-Cheng Huang, Po-Chih Kuo and Zachary Zaiman for The Lancet on May 11, 2022
Don’t miss this 4-part journalism series on ‘AI Colonialism’
The MIT Technology Review has written a four-part series on how the impact of AI is “repeating the patterns of colonial history.” The Review is careful not to directly compare the current situation with the colonialist capturing of land, extraction of resources, and exploitation of people. Yet, they clearly show that AI does further enrich the wealthy at the tremendous expense of the poor.
Continue reading “Don’t miss this 4-part journalism series on ‘AI Colonialism’”
Exploitative labour is central to the infrastructure of AI
In this piece, Julian Posada writes about a family of five in Venezuela, who synchronise their routines so that there will always be two people at the computer working for a crowdsourcing platform to make a living. They earn a few cents per task in a cryptocurrency and are only allowed to cash out once they’ve made at least the equivalent of USD 10. On average they earn about USD 20 per week, but their earnings can be erratic, resulting in extreme stress and precarity.
Continue reading “Exploitative labour is central to the infrastructure of AI”
The Case of the Creepy Algorithm That ‘Predicted’ Teen Pregnancy
A government leader in Argentina hailed the AI, which was fed invasive data about girls. The feminist pushback could inform the future of health tech.
By Alexa Hagerty, Diego Jemio and Florencia Aranda for WIRED on February 16, 2022
Don’t ask if artificial intelligence is good or fair, ask how it shifts power
Those who could be exploited by AI should be shaping its projects.
By Pratyusha Kalluri for Nature on July 7, 2020
Crime Prediction Keeps Society Stuck in the Past
So long as algorithms are trained on racist historical data and outdated values, there will be no opportunities for change.
By Chris Gilliard for WIRED on January 2, 2022
The Humanities Can’t Save Big Tech From Itself
Hiring sociocultural workers to correct bias overlooks the limitations of these underappreciated fields.
By Elena Maris for WIRED on January 12, 2022
For truly ethical AI, its research must be independent from big tech
We must curb the power of Silicon Valley and protect those who speak up about the harms of AI.
By Timnit Gebru for The Guardian on December 6, 2021
