Parables of AI in/from the Majority World: An Anthology

Encounters with data and AI require contending with the uncertainties of systems that are most often understood through their inputs and outputs. Storytelling is one way to reckon with and make sense of these uncertainties. So what stories can we tell about a world that has increasingly come to rely on AI-based, data-driven interventions to address social problems?

By Patrick Davison, Ranjit Singh and Rigoberto Lara Guzmán for Data & Society on December 7, 2022

We come to bury ChatGPT, not to praise it.

Large language models (LLMs) like the GPT family learn the statistical structure of language by optimising their ability to predict missing words in sentences (as in ‘The cat sat on the [BLANK]’). Despite the impressive technical ju-jitsu of transformer models and the billions of parameters they learn, it’s still a computational guessing game. ChatGPT is, in technical terms, a ‘bullshit generator’.

By Dan McQuillan for Dan McQuillan on February 6, 2023
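
To make McQuillan's "computational guessing game" concrete, here is a minimal sketch using the Hugging Face transformers library. (GPT-style models predict the next token; the closest off-the-shelf demo of filling in a blank uses a masked model like BERT, and the model name and sentence below are illustrative, not from McQuillan's piece.)

    # Fill-in-the-blank prediction: the model ranks candidate words for the
    # masked position purely from statistical patterns in its training text.
    from transformers import pipeline

    fill = pipeline("fill-mask", model="bert-base-uncased")

    for guess in fill("The cat sat on the [MASK]."):
        # Each guess carries a candidate token and its probability score.
        print(f"{guess['token_str']:>10}  p={guess['score']:.3f}")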

Word embeddings quantify 100 years of gender and ethnic stereotypes

Word embeddings are a popular machine-learning method that represents each English word by a vector, such that the geometry between these vectors captures semantic relations between the corresponding words. We demonstrate that word embeddings can be used as a powerful tool to quantify historical trends and social change. As specific applications, we develop metrics based on word embeddings to characterize how gender stereotypes and attitudes toward ethnic minorities in the United States evolved during the 20th and 21st centuries starting from 1910. Our framework opens up a fruitful intersection between machine learning and quantitative social science.

By Dan Jurafsky, James Zou, Londa Schiebinger and Nikhil Garg for PNAS on April 3, 2018
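
As a rough illustration of the kind of metric the paper builds (this sketch uses present-day GloVe vectors loaded through gensim, not the authors' decade-by-decade historical embeddings):

    # Score occupation words by whether they sit closer to "she" or "he"
    # in embedding space -- a crude cousin of the paper's bias metrics.
    import gensim.downloader

    vecs = gensim.downloader.load("glove-wiki-gigaword-50")  # small GloVe set

    for word in ["nurse", "engineer", "librarian", "carpenter"]:
        bias = vecs.similarity(word, "she") - vecs.similarity(word, "he")
        print(f"{word:>10}: {bias:+.3f}  (positive = closer to 'she')")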

The cheap, racialised, Kenyan workers making ChatGPT “safe”

Stories about the hidden and exploitative racialised labour that fuels the development of technologies continue to surface, this time about ChatGPT. Billy Perrigo, who previously reported on Meta’s content moderation sweatshop and on whistleblower Daniel Motaung, who took Meta to court, has shed light on how OpenAI relied on outsourced and exploitative labour in Kenya to make ChatGPT less toxic.

Continue reading “The cheap, racialised, Kenyan workers making ChatGPT “safe””

Quantifying bias in society with ChatGPT-like tools

ChatGPT is an implementation of a so-called ‘large language model’. These models are trained on text from the internet at large, which means that they inherit the bias that exists in our language and in our society. This has an interesting consequence: it suddenly becomes possible to see, in a quantitative and undeniable way, how bias changes over time.

Continue reading “Quantifying bias in society with ChatGPT-like tools”
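
One hypothetical way to turn such a measurement into numbers (a probe of our own devising, not the method from the post): compare the probabilities a masked language model assigns to gendered pronouns in a fixed template.

    # Compare the model's scores for "he" vs "she" in the same template --
    # a crude, repeatable probe of the bias the model inherited.
    from transformers import pipeline

    fill = pipeline("fill-mask", model="bert-base-uncased")

    for job in ["doctor", "nurse"]:
        guesses = fill(f"The {job} said [MASK] would be late.",
                       targets=["he", "she"])
        scores = {g["token_str"]: round(g["score"], 3) for g in guesses}
        print(job, scores)

Run the same probe against models trained on corpora from different periods, and the shift in these scores becomes a time series of bias.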

Alliance Against Military AI

Civil society organisations urge the Dutch government to immediately establish a moratorium on developing AI systems in the military domain.

By Oumaima Hajri for Alliantie tegen militaire AI on February 15, 2023

The Costs of Connection – How Data Is Colonizing Human Life and Appropriating It for Capitalism

A profound exploration of how the ceaseless extraction of information about our intimate lives is remaking both global markets and our very selves. The Costs of Connection represents an enormous step forward in our collective understanding of capitalism’s current stage, a stage in which the final colonial input is the raw data of human life. Challenging, urgent and bracingly original.

By Nick Couldry and Ulises A. Mejias for Colonized by Data

Racist Technology in Action: Let’s make an avatar! Of sexy women and tough men of course

Just upload a selfie to the “AI avatar app” Lensa and it will generate a digital portrait of you. Think, for example, of a slightly more fit or beautiful version of yourself as an astronaut or the lead singer in a band. If you are a man, that is. As it turns out, for women, and especially women with Asian heritage, Lensa churns out pornified, sexy and skimpily clothed avatars.

Continue reading “Racist Technology in Action: Let’s make an avatar! Of sexy women and tough men of course”

Better Images of AI

We are a non-profit creating more realistic and inclusive images of artificial intelligence. Visit our growing repository available for anyone to use for free under CC licences, or just to use as inspiration for more helpful and diverse representations of AI.

From Better Images of AI

Racist Technology in Action: AI-generated image tools amplify harmful stereotypes

Deep learning models that allow you to make images from simple textual ‘prompts’ have recently become available for the general public. Having been trained on a world full of visual representations of social stereotypes, it comes as no surprise that these tools perpetuate a lot of biased and harmful imagery.

Continue reading “Racist Technology in Action: AI-generated image tools amplify harmful stereotypes”
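
You can audit such a tool yourself with the open diffusers library. A sketch (the model checkpoint and prompt are illustrative, and a GPU is assumed):

    # Generate several images for a deliberately underspecified prompt and
    # review who the model depicts by default -- a simple bias audit.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")  # drop torch_dtype and .to("cuda") to run (slowly) on CPU

    for i in range(8):
        image = pipe("a photo of a CEO").images[0]
        image.save(f"ceo_{i}.png")  # inspect the set for skew in who appears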

AI innovation for whom, and at whose expense?

This fantastic article by Williams, Miceli and Gebru describes how the methodological shift of AI systems to deep-learning-based models has required enormous amounts of “data” for models to learn from. Large volumes of time-consuming work, such as labelling millions of images, can now be broken down into smaller tasks and outsourced to data labourers across the globe. These data labourers are paid terribly low wages, often while working in dire conditions.

Continue reading “AI innovation for whom, and at whose expense?”

A Primer on AI in/from the Majority World

A resource by Sareeta Amrute, Ranjit Singh, and Rigoberto Lara Guzmán exploring the presence of artificial intelligence and technology in the Majority World. 160 thematic works, available in English and Spanish.

By Ranjit Singh, Rigoberto Lara Guzmán and Sareeta Amrute for Data & Society on September 14, 2022

Defective Altruism

Socialism is the most effective altruism. Who needs anything else? The repugnant philosophy of “Effective Altruism” offers nothing to movements for global justice.

By Nathan J. Robinson for Current Affairs on September 19, 2022

You can also do something good with artificial intelligence

It is easy to think of artificial intelligence as something only to be wary of: a powerful weapon in the hands of the government, or of tech companies guilty of privacy violations, discrimination or unjust punishments. But we can actually use algorithms to solve problems and work towards a more just world, computer scientist Sennay Ghebreab of the Civic AI Lab tells Kustaw Bessems. For that, though, we need to understand the basics and have more of a say in how it is used.

By Kustaw Bessems and Sennay Ghebreab for Volkskrant on September 11, 2022

AI-trained robots bring algorithmic biases into robotics

A recent study in robotics has drawn attention from news media such as The Washington Post and VICE. In this study, researchers programmed virtual robots with popular artificial intelligence algorithms. These robots were then asked to scan blocks containing pictures of people’s faces and decide which blocks to put into a virtual “box”, following an open-ended instruction. The researchers quickly found that the robots repeatedly picked women and people of color to be put in the “box” when asked to respond to words such as “criminal”, “homemaker”, and “janitor”. The behavior of these robots shows that sexist and racist biases coded into AI algorithms have leaked into the field of robotics.

Continue reading “AI-trained robots bring algorithmic biases into robotics”
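
Systems like the ones in the study reportedly build on CLIP-style image-text matching, so the mechanism can be sketched (the checkpoint is real, the face photo a placeholder; the point is that a face gives no valid grounds for any of these labels):

    # Score how strongly a CLIP-style model associates a face photo with
    # loaded words -- the kind of matching such robot pipelines rely on.
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    image = Image.open("face.jpg")  # placeholder: any face photo
    labels = ["a photo of a criminal", "a photo of a homemaker",
              "a photo of a doctor"]

    inputs = processor(text=labels, images=image,
                       return_tensors="pt", padding=True)
    probs = model(**inputs).logits_per_image.softmax(dim=-1)[0]
    for label, p in zip(labels, probs):
        print(f"{label}: {p.item():.3f}")  # no face should license these labels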

Racist Technology in Action: Turning a Black person, White

An example of racial bias in machine learning strikes again, this time by a program called PULSE, as reported by The Verge. Input a low resolution image of Barack Obama – or another person of colour such as Alexandria Ocasio-Cortez or Lucy Liu – and the resulting high resolution image generated by the AI is distinctly of a white person.

Continue reading “Racist Technology in Action: Turning a Black person, White”

Racist Technology in Action: Beauty is in the eye of the AI

Where people’s notion of beauty is often steeped in cultural preferences or plain prejudice, the objectivity of an AI system would surely allow it to access a more universal conception of beauty – or so thought the developers of Beauty.AI. Alex Zhavoronkov, who consulted on the development of the Beauty.AI system, described the dystopian motivation behind the system clearly: “Humans are generally biased and there needs to be a robot to provide an impartial opinion. Beauty.AI is the first step in a much larger story, in which a mobile app trained to evaluate perception of human appearance will evolve into a caring personal assistant to help users look their best and retain their youthful looks.”

Continue reading “Racist Technology in Action: Beauty is in the eye of the AI”

AI recognition of patient race in medical imaging: a modelling study

Previous studies in medical imaging have shown disparate abilities of artificial intelligence (AI) to detect a person’s race, yet there is no known correlation for race on medical imaging that would be obvious to human experts when interpreting the images. We aimed to conduct a comprehensive evaluation of the ability of AI to recognise a patient’s racial identity from medical images.

By Ananth Reddy Bhimireddy, Ayis T. Pyrros, Brandon J. Price, Chima Okechukwu, Haoran Zhang, Hari Trivedi, Imon Banerjee, John L. Burns, Judy Wawira Gichoya, Laleh Seyyed-Kalantari, Lauren Oakden-Rayner, Leo Anthony Celi, Li-Ching Chen, Lyle J. Palmer, Marzyeh Ghassemi, Matthew P. Lungren, Natalie Dullerud, Ramon Correa, Ryan Wang, Saptarshi Purkayastha, Shih-Cheng Huang, Po-Chih Kuo and Zachary Zaiman for The Lancet on May 11, 2022
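
The core test in the paper can be sketched as a standard supervised evaluation. Everything below is schematic: random tensors stand in for the paper's medical images, and the model and label set are invented placeholders.

    # Schematic version of the evaluation: train an image classifier on
    # self-reported race labels, then measure how well it predicts them.
    import torch
    import torch.nn as nn
    from torchvision.models import resnet18

    images = torch.randn(64, 3, 224, 224)   # placeholder "medical images"
    labels = torch.randint(0, 3, (64,))     # placeholder group labels

    model = resnet18(num_classes=3)
    optim = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(3):                      # a few illustrative steps
        optim.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optim.step()

    # In the real study, performance would be measured on held-out scans.
    accuracy = (model(images).argmax(dim=1) == labels).float().mean().item()
    print(f"(toy) accuracy: {accuracy:.2f}")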

Don’t miss this 4-part journalism series on ‘AI Colonialism’

The MIT Technology Review has written a four-part series on how the impact of AI is “repeating the patterns of colonial history.” The Review is careful not to directly compare the current situation with the colonialist capturing of land, extraction of resources, and exploitation of people. Yet, they clearly show that AI does further enrich the wealthy at the tremendous expense of the poor.

Continue reading “Don’t miss this 4-part journalism series on ‘AI Colonialism’”

Exploitative labour is central to the infrastructure of AI

In this piece, Julian Posada writes about a family of five in Venezuela, who synchronise their routines so that there will always be two people at the computer working for a crowdsourcing platform to make a living. They earn a few cents per task in a cryptocurrency and are only allowed to cash out once they’ve made at least the equivalent of USD 10. On average they earn about USD 20 per week, but their earnings can be erratic, resulting in extreme stress and precarity.

Continue reading “Exploitative labour is central to the infrastructure of AI”
