Algorithm Watch experimented with three major generative AI tools, generating 8,700 images of politicians. They found that all three tools actively try to mitigate bias, but that the way they go about it is itself problematic.
The tools appear to attach a nationality to people based on their name alone. Certain names also led them to depict women wearing a headscarf, and a Russian-sounding name produced a picture that was the spitting image of Lenin.
By using the API, Algorithm Watch found out how a tool like DALL·E rewrites prompts in the background to increase the diversity of its output. If you type in “Martin Häusling is having dinner with staff”, DALL·E turns it into the following prompt behind the scenes (without displaying this in the user interface):
A middle-aged Caucasian male, clad in business casual attire, is sitting at a large, well-set dining table. He is engrossed in lively conversation with a diverse group of individuals, presumably his team members. There is a Middle-Eastern woman, a Hispanic man, a Black woman and a South Asian man. They all are sitting around the table, partaking in the meal and engaging in discussion. The atmosphere is convivial and warm, reflecting a positive office culture.
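This rewriting can be observed directly: the DALL·E 3 API returns the rewritten prompt in a `revised_prompt` field alongside each generated image. Here is a minimal sketch using OpenAI's Python SDK; this is not Algorithm Watch's actual code, and it assumes the `openai` package is installed and an API key is set in the environment:

```python
# Minimal sketch: retrieve the prompt that DALL-E 3 silently rewrites.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set;
# this is an illustration, not Algorithm Watch's actual code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.images.generate(
    model="dall-e-3",
    prompt="Martin Häusling is having dinner with staff",
    n=1,
    size="1024x1024",
)

image = response.data[0]
print("Revised prompt:", image.revised_prompt)  # the background rewrite
print("Image URL:", image.url)
```

Running this shows the expanded, diversity-injected prompt quoted above, even though the web interface never surfaces it to the user.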
See “Image generators are trying to hide their biases – and they make them worse” at Algorithm Watch.