Black artists show how generative AI ignores, distorts, erases and censors their histories and cultures

Black artists have been tinkering with machine learning algorithms in their artistic projects, surfacing many questions about the troubling relationship between AI and race, as reported in the New York Times.

One Senegalese artist, Linda Dounia Rebeiz, used OpenAI’s image generator to imagine buildings of her hometown, Dakar. The algorithm produced ruined buildings and arid desert landscapes, nothing like the coastal homes of the Senegalese capital. In other words, the algorithm reflected a cultural image of Africa created by the West, defaulting to racist and colonialist stereotypes. The (in)visibility and misrepresentation of Black people in the context of generative AI is not a new phenomenon; it reflects and extends a longer history of what Frantz Fanon articulated of the Black body as “hyper-visible and invisible at the same time.” Chris Gilliard echoes this, in the context of the debate around racist technologies, as “Blackness being seen and not understood.”

In response to racial bias in these large datasets and models, the companies behind AI image generators, such as OpenAI, Stability AI, and Midjourney, have acknowledged bias as a problem. Yet their superficial “fixes” amount to banning certain words from text prompts, improving bias evaluation techniques, or attempting to “diversify” their datasets. In practice, several artists have faced censorship of their work because companies have banned words such as “slave” from the text prompts users submit to these generators. As artist Stephanie Dinkins articulates:

What is this technology doing to history? You can see that someone is trying to correct for bias, yet at the same time that erases a piece of history. I find those erasures as dangerous as any bias, because we are just going to forget how we got here. […] Improvements obscure some of the deeper questions we should be asking about discrimination.

Despite the many issues and harms that have been raised around discrimination, racism, and AI systems, investment in AI companies continues to pour in. The article highlights that in 2022, almost 3,200 startups received US$52.1 billion in funding.

A spokeswoman for OpenAI declined to reveal the number of people and amount of money the company has allocated to “fixing” racial bias in its models. What is known, however, is that data theft and worker exploitation are central to models such as DALL·E 2 and ChatGPT, and core to how companies like OpenAI amass their power and wealth.

See: Black Artists Say A.I. Shows Bias, With Algorithms Erasing Their History at the New York Times.

Header photo by Flo Ngala for The New York Times from the original article.
