A recent, yet already classic, example of racist technology is Twitter’s machine learning algorithm for photo cropping. The algorithm was shown to consistently favour white faces in the cropped previews of pictures.
As the algorithm automatically put the white faces on an image front and centre, black, brown or Asian faces were hidden in the cropped versions of the pictures. The racism came to light by accident after tweets from student Colin Madland went viral in September 2020. Madland, a white man, had posted several pictures of himself and a black colleague, with the black colleague consistently being cropped out of the preview. Once his tweets went viral, many people replicated the same racist result; notably, even cartoon characters did not escape the racist treatment. Most popular were pictures of Mitch McConnell and Barack Obama, in which Obama’s image was consistently cropped out.
Responding to the controversy, Twitter emphasized that it ‘tested for bias’ before implementing the system and vowed to address the apparent ‘bias’ in its algorithm. One possible explanation offered was the model’s reliance on contrast in its learned patterns. However, Twitter struggled to explain how the model was trained and how it currently functions and, as of last December, the algorithm still seemed to display the same racist behaviour.
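To make the mechanism concrete: cropping algorithms of this kind typically score every pixel with a predicted ‘saliency’ (importance) and then pick the crop window whose total score is highest. If the saliency model systematically scores lighter, higher-contrast faces as more important, the crop follows them. The sketch below is purely illustrative; the `crop_by_saliency` function and the toy saliency map are assumptions for demonstration, not Twitter’s actual model or code.

```python
import numpy as np

def crop_by_saliency(saliency, crop_h, crop_w):
    """Return the top-left corner of the crop window with the
    highest summed saliency. Illustrative sketch only, not
    Twitter's implementation."""
    h, w = saliency.shape
    # An integral image makes every window sum an O(1) lookup.
    integral = np.pad(saliency, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    best, best_pos = -1.0, (0, 0)
    for y in range(h - crop_h + 1):
        for x in range(w - crop_w + 1):
            s = (integral[y + crop_h, x + crop_w] - integral[y, x + crop_w]
                 - integral[y + crop_h, x] + integral[y, x])
            if s > best:
                best, best_pos = s, (y, x)
    return best_pos

# A toy saliency map where the model scores only the right edge
# as 'important' — the chosen crop snaps to that region,
# and everything else is discarded from the preview.
sal = np.zeros((4, 8))
sal[:, 6:] = 1.0
print(crop_by_saliency(sal, 4, 4))  # → (0, 4)
```

The point of the sketch is that the cropping step itself is mechanical: whatever bias exists lives in the saliency scores the model produces, which is why Twitter’s ‘bias testing’ of the end-to-end system was so hard for outsiders to scrutinise.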
Update (May 14, 2021): Twitter has changed its cropping algorithm, now leaving many more images uncropped. Although a step forward, this obviously does not properly address the core of the problem.