Last month, we wrote a piece in Lilith Mag that builds on some of the examples we have previously highlighted – the Dutch childcare benefits scandal, the use of online proctoring software, and the popular dating app Grindr – to underscore two central ideas.
First, technology itself can be racist. With online proctoring software used in education, dark-skinned students had to shine a light on their faces to be verified for an exam because the Proctorio software could not otherwise detect them. Facial recognition systems have been shown to have high error rates when identifying people of colour, as we have previously flagged. Yet these systems remain in place. The point we want to drive home is that it is not enough to say that these technologies produce racist outcomes. Rather, some technologies themselves explicitly reproduce and exacerbate existing racist patterns in society and should simply be banned.
Second, regardless of intent and outcome, the creation and use of such racist technologies are always choices actively made by people in positions of power. These individuals must therefore be held responsible for the tangible harms that have been, and can be, inflicted on individuals and communities. Despite the prevailing notion that technology can solve bias and racism, let us not forget that institutional and structural racism is rife in our societies, with or without the use of technology. We need to demand that our governments, employers, and communities take responsibility, rather than use technology as a veneer over, or a supposed solution to, existing racist practices and behaviours.
See: Technology can be racist and we should talk about that at Lilith.