Introducing the Monk Skin Tone (MST) Scale, one of the ways we are moving AI forward with more inclusive computer vision tools.
From Skin Tone at Google
Skin tone is an observable characteristic that is subjective, perceived differently by individuals (e.g., depending on their location or culture) and thus is complicated to annotate. That said, the ability to reliably and accurately annotate skin tone is highly important in computer vision. This became apparent in 2018, when the Gender Shades study highlighted that computer vision systems struggled to detect people with darker skin tones, and performed particularly poorly for women with darker skin tones. The study underscored how important it is for computer vision researchers and practitioners to evaluate their technologies across the full range of skin tones and at intersections of identities. Beyond evaluating model performance on skin tone, skin tone annotations enable researchers to measure diversity and representation in image retrieval systems, dataset collection, and image generation. For all of these applications, a collection of meaningful and inclusive skin tone annotations is key.
By Candice Schumann and Gbolahan O. Olanubi for Google AI Blog on May 15, 2023
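The evaluation practice described above can be made concrete in a few lines of code. Below is a minimal sketch of disaggregating a classifier's accuracy by skin tone and gender; the file name and columns (predictions.csv, mst_tone, gender, y_true, y_pred) are hypothetical, not part of Google's MST release.

```python
import pandas as pd

# Hypothetical predictions file with columns:
# mst_tone (1-10), gender, y_true, y_pred.
df = pd.read_csv("predictions.csv")
df["correct"] = df["y_true"] == df["y_pred"]

# Accuracy per skin tone group, and at the intersection with gender,
# mirroring the disaggregated analysis of the Gender Shades study.
print(df.groupby("mst_tone")["correct"].mean())
print(df.groupby(["mst_tone", "gender"])["correct"].mean())
```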
An artist considers a new form of machinic representation: the statistical rendering of large datasets, indexed to the probable rather than the real of photography; to the uncanny composite rather than the abstraction of the graph.
By Hito Steyerl for New Left Review on April 28, 2023
The harms from so-called AI are real and present and follow from the acts of people and corporations deploying automated systems. Regulatory efforts should focus on transparency, accountability and preventing exploitative labor practices.
By Angelina McMillan-Major, Emily M. Bender, Margaret Mitchell and Timnit Gebru for DAIR on March 31, 2023
In this (rightly) scathing article, Andrea Grimes denounces the “AI” hype by tech bros that is perpetuated by much of the mainstream media coverage of large language models, ChatGPT, and GPT-4.
The former Googler and current Signal president on why she thinks Geoffrey Hinton’s alarmism is a distraction from more pressing threats.
By Meredith Whittaker and Wilfred Chan for Fast Company on May 5, 2023
Gunpowder was a wonderfully clever invention, and it has both good and bad applications. Will we one day look at artificial intelligence in the same way?
By Claes de Vreese, Hind Dekker-Abdulaziz, Ilyaz Nasrullah, Martijn Bertisen, Nienke Schipper and Oumaima Hajri for Trouw on May 2, 2023
As billions flow into robotics, researchers who conducted the study are concerned about the effects this might have on society.
By Pranshu Verma for Washington Post on July 16, 2022
Tech pundits presume artificial intelligence is something you either conquer or succumb to. But they’re looking at it all wrong.
By Andrea Grimes for Dame Magazine on April 11, 2023
In 2019, former UN Special Rapporteur Philip Alston said he was worried we were “stumbling zombie-like into a digital welfare dystopia.” He had been researching how government agencies around the world were turning to automated decision-making systems (ADS) to cut costs, increase efficiency and target resources. ADS are technical systems designed to help or replace human decision-making using algorithms.
By Joanna Redden for Parental social licence for data linkage for service intervention on October 5, 2022
A report validated Palestinian experiences of social media censorship in May 2021, but missed how those policies are biased by design.
By Marwa Fatafta for +972 Magazine on October 9, 2022
These days there is no getting around AI. Whether it is ChatGPT or the app Lensa AI, anyone who moves through the digital world will sooner or later come into contact with it. Taking stock of the question ‘is AI good or bad?’ is difficult, especially because it is not yet all that widely used. But if the experts are to be believed, that will be different in the future. High time, then, for award-winning photographer Cigdem Yuksel to investigate what the use of AI means for how Muslim women are depicted. Lilith Magazine spoke with Yuksel and with Laurens Vreekamp, author of the Art of AI.
By Aimée Dabekaussen, Cigdem Yuksel and Laurens Vreekamp for Lilith on April 6, 2023
With the rapid development of AI systems, more and more people are trying to grapple with the potential impact of these systems on our societies and daily lives. One frequently used way to make sense of AI is through metaphors, which either help to clarify matters or horribly muddy the waters.
In this interview with Felienne Hermans, Professor of Computer Science at the Vrije Universiteit Amsterdam, she discusses the sore lack of diversity in the white male-dominated world of programming, the importance of teaching people how to code, and the problematic uses of AI systems.
In this piece on Medium, Jenka Gurfinkel writes about a Reddit user who asked Midjourney, a generative AI, to do the following:
Imagine a time traveler journeyed to various times and places throughout human history and showed soldiers and warriors of the periods what a “selfie” is.
In an interview with Zoë Corbyn in the Guardian, data journalist and Associate Professor of Journalism, Meredith Broussard discusses her new book More Than a Glitch: Confronting Race, Gender and Ability Bias in Tech.
Fashion brands including Levi’s and Calvin Klein are having custom AI models created to ‘supplement’ representation in size, skin tone and age.
By Alaina Demopoulos for The Guardian on April 3, 2023
How AI misrepresents culture through a facial expression.
By Jenka Gurfinkel for Medium on March 26, 2023
The journalist and academic says the bias encoded in artificial intelligence systems can’t be fixed with better data alone – the change has to be societal.
By Meredith Broussard and Zoë Corbyn for The Guardian on March 26, 2023
Programming is still a man’s world. Professor of computer science Felienne Hermans wants to change that. In the meantime, she lies awake at night over the wide array of misery that new AI applications such as ChatGPT bring about.
By Felienne Hermans and Laurens Verhagen for Volkskrant on March 16, 2023
This episode is devoted to the investigation The Public Interest vs. Big Tech, which is about the trouble civil society organisations run into because of the power big tech companies hold over their communication. Inge talks with Evelyn, Lotje and Ramla about this investigation, which we carried out together with four citizen movements and with Pilp (the Public Interest Litigation Project). We also call in Oumaima Hajri, who, in collaboration with the Racism and Technology Center, has started an alliance against the militarisation of AI.
By Evelyn Austin, Inge Wannet, Lotje Beek and Oumaima Hajri for Bits of Freedom on March 17, 2023
You are not a parrot. And a chatbot is not a human. And a linguist named Emily M. Bender is very worried about what will happen when we forget this.
By Elizabeth Weil and Emily M. Bender for New York Magazine on March 1, 2023
This collection by the Data & Society Research Institute sheds an intimate and grounded light on the impact AI systems can have. The guiding question that connects the 13 non-fiction pieces in Parables of AI in/from the Majority World: An Anthology is: what stories can be told about a world in which solving societal issues depends more and more on AI-based and data-driven technologies? By narrating ordinary, everyday experiences in the majority world, the book, edited by Rigoberto Lara Guzmán, Ranjit Singh and Patrick Davison, slowly disentangles the global and unequally distributed impact of digital technologies.
The current wave of reporting on the AI-bubble has one advantage: it also creates a bit of space in the media to write about how AI reflects the existing inequities in our society.
The AI-fueled chatbot gives answers that can seem human-sounding. They may also share humans’ bias.
From CBS News on March 6, 2023
Obscure government algorithms are making life-changing decisions about millions of people around the world. Here, for the first time, we reveal how one of these systems works.
By Dhruv Mehrotra, Eva Constantaras, Gabriel Geiger, Htet Aung and Justin-Casimir Braun for WIRED on March 6, 2023
According to OpenAI and Google, artificial intelligence can benefit all of humanity. But research shows how one-sided and limited most of the data on which AI is trained really is. For researcher Balázs Bodó, that is reason to press the big red pause button.
By Balázs Bodó and Maurits Martijn for De Correspondent on February 22, 2023
The Netherlands is eager to be a frontrunner in the use of artificial intelligence in military situations. This technology, however, can lead to racism and discrimination. In an open letter, critics call for a moratorium on the use of artificial intelligence. Initiator Oumaima Hajri explains why.
By Oumaima Hajri for De Kanttekening on February 22, 2023
Encounters with data and AI require contending with the uncertainties of systems that are most often understood through their inputs and outputs. Storytelling is one way to reckon with and make sense of these uncertainties. So what stories can we tell about a world that has increasingly come to rely on AI-based, data-driven interventions to address social problems?
By Patrick Davison, Ranjit Singh and Rigoberto Lara Guzmán for Data & Society on December 7, 2022
Large language models (LLMs) like the GPT family learn the statistical structure of language by optimising their ability to predict missing words in sentences (as in ‘The cat sat on the [BLANK]’). Despite the impressive technical ju-jitsu of transformer models and the billions of parameters they learn, it’s still a computational guessing game. ChatGPT is, in technical terms, a ‘bullshit generator’.
By Dan McQuillan for Dan McQuillan on February 6, 2023
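McQuillan’s ‘guessing game’ is easy to demonstrate. The sketch below, assuming the Hugging Face transformers library and a BERT-style masked language model, asks a model to fill in the blank from his own example; what comes back is a probability ranking over words, not a statement about the world.

```python
from transformers import pipeline

# A BERT-style masked language model: it predicts the missing word.
fill = pipeline("fill-mask", model="bert-base-uncased")

# The model ranks candidate words by probability; it predicts what is
# statistically likely in its training text, not what is true.
for guess in fill("The cat sat on the [MASK]."):
    print(f"{guess['token_str']:>8}  p={guess['score']:.3f}")
```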
Word embeddings are a popular machine-learning method that represents each English word by a vector, such that the geometry between these vectors captures semantic relations between the corresponding words. We demonstrate that word embeddings can be used as a powerful tool to quantify historical trends and social change. As specific applications, we develop metrics based on word embeddings to characterize how gender stereotypes and attitudes toward ethnic minorities in the United States evolved during the 20th and 21st centuries starting from 1910. Our framework opens up a fruitful intersection between machine learning and quantitative social science.
By Dan Jurafsky, James Zou, Londa Schiebinger and Nikhil Garg for PNAS on April 3, 2018
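The paper’s core idea, locating occupation words relative to gendered words in embedding space, can be sketched briefly. The snippet below assumes the gensim library and its downloadable GloVe vectors; the word lists are illustrative and the score is a simplified version of the relative norm distance the authors use.

```python
import numpy as np
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # small pretrained embeddings

# Average vectors for illustrative gendered word lists.
she = np.mean([vectors[w] for w in ["she", "her", "woman"]], axis=0)
he = np.mean([vectors[w] for w in ["he", "his", "man"]], axis=0)

# Simplified relative norm distance: negative leans 'female',
# positive leans 'male' in the embedding geometry.
for job in ["nurse", "engineer", "librarian", "carpenter"]:
    bias = np.linalg.norm(vectors[job] - she) - np.linalg.norm(vectors[job] - he)
    print(f"{job:>10}  {bias:+.3f}")
```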
This past week the Dutch government hosted and organised the military AI conference REAIM 2023. Together with eight other NGOs, we signed an open letter, initiated by Oumaima Hajri, that calls on the Dutch government to stop promoting narratives of “innovation” and “opportunities” and, rather, to centre the very real and often disparate human impact.
Stories about the hidden and exploitative racialised labour which fuels the development of technologies continue to surface, and this time it is about ChatGPT. Billy Perrigo, who previously reported on Meta’s content moderation sweatshop and on whistleblower Daniel Motaung, who took Meta to court, has shed light on how OpenAI relied on outsourced and exploitative labour in Kenya to make ChatGPT less toxic.
ChatGPT is an implementation of a so-called ‘large language model’. These models are trained on text from the internet at large, which means they inherit the bias that exists in our language and in our society. This has an interesting consequence: it suddenly becomes possible to see, in a quantitative and undeniable way, how bias changes over time.
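One way such quantification could look in practice: the sketch below (again assuming the Hugging Face transformers library, with ad-hoc example prompts) compares how strongly a masked language model associates ‘he’ and ‘she’ with different occupations. Run against models trained on corpora from different eras, the same probe would show bias shifting over time.

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

for sentence in ["The doctor said [MASK] would be late.",
                 "The nurse said [MASK] would be late."]:
    # Restrict scoring to the two pronouns we want to compare.
    for guess in fill(sentence, targets=["he", "she"]):
        print(sentence, guess["token_str"], f"p={guess['score']:.3f}")
```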
Civil society organisations urge the Dutch government to immediately establish a moratorium on developing AI systems in the military domain.
By Oumaima Hajri for Alliantie tegen militaire AI on February 15, 2023
A profound exploration of how the ceaseless extraction of information about our intimate lives is remaking both global markets and our very selves. The Costs of Connection represents an enormous step forward in our collective understanding of capitalism’s current stage, a stage in which the final colonial input is the raw data of human life. Challenging, urgent and bracingly original.
By Nick Couldry and Ulises A. Mejias for Colonized by Data
Je zult de populaire chatbot ChatGPT niet snel betrappen op vieze woordjes of racistische taal. Hij is keurig getraind door tientallen Kenianen. Hun taak: het algoritme leren vooral niet te beginnen over moord, marteling en verkrachting, zodat wij – de gebruikers – geen smerige drek voorgeschoteld krijgen.
By Maurits Martijn for De Correspondent on January 28, 2023
OpenAI used outsourced workers in Kenya earning less than $2 per hour to scrub toxicity from ChatGPT.
By Billy Perrigo for Time on January 18, 2023
Just upload a selfie in the “AI avatar app” Lensa and it will generate a digital portrait of you. Think, for example, of a slightly more fit or beautiful version of yourself as an astronaut or the lead singer in a band. If you are a man, that is. As it turns out, for women, and especially women with Asian heritage, Lensa churns out pornified, sexy and skimpily clothed avatars.
Unsurprisingly, the artistic and ethical shortcomings of AI image generators are tied to their dependence on capital and capitalism.
By Marco Donnarumma for Hyperallergic on October 24, 2022