Racist Technology in Action: Generative/ing AI Bias

By now we know that generative image AI reproduces and amplifies sexism, racism, and other social systems of oppression. The latest example is of AI-generated stickers in WhatsApp that systematically depict Palestinian men and boys with rifles and guns.

A prompt with the text ‘Muslim boy palestinian’ showing two results: a young boy in a uniform with a Palestinian flag, and a young boy in a uniform wielding a machine gun.

When prompting the system about Israel, by contrast, not even ‘Israeli army’ produces an image with guns.

The cause of this ‘bias’ or stereotyping is that generative AI systems reproduce the problematic data they are fed (read more here and here).

Clearly, this depiction of Palestinians as gun-wielding is a reflection of the discriminatory and most likely Islamophobic content that is the basis of WhatsApp’s generative AI. Such violent depictions, in a small way, contribute to the continuous dehumanisation of Palestinians and function to justify the ongoing genocide. Moreover, that only men and boys are depicted as violent reinforces the incredibly harmful narratives of women and children as solely helpless and of men as killable targets.

Meta (WhatsApp’s, but also Facebook’s and Instagram’s, parent company) has received stark criticism ever since the current siege started in October for suppressing and over-policing content supporting Palestine. This discriminatory content moderation, silencing Palestinian voices and perspectives, is, however, not new. Last year, an independent report commissioned by Meta after it was accused of biased content moderation concluded that:

Meta’s actions in May 2021 appear to have had an adverse human rights impact … on the rights of Palestinian users to freedom of expression, freedom of assembly, political participation, and non-discrimination, and therefore on the ability of Palestinians to share information and insights about their experiences as they occurred.

These seemingly innocent, brightly coloured stickers are just the latest instance both of a long chain of discrimination in generative AI and of the suppression of Palestinian perspectives by Meta.

See: WhatsApp’s AI shows gun-wielding children when prompted with ‘Palestine’ at The Guardian.

Image via WhatsApp from the original Guardian article.


