This fantastic article by Williams, Miceli and Gebru describes how the methodological shift of AI systems toward deep-learning-based models has required enormous amounts of “data” for models to learn from. Large volumes of time-consuming work, such as labelling millions of images, can now be broken down into smaller tasks and outsourced to data labourers across the globe. These data labourers are paid terribly low wages and often work in dire conditions.
They label, sort and categorise data, and at times supply data themselves by uploading selfies or images of objects around them. Their labour is fundamental to the persistent narrative, adopted by companies, states and international organisations, that AI and automation are innovative and inevitable. The belief is that these AI systems will solve issues around content moderation, and that self-driving cars and facial recognition technologies are innovations to be celebrated.
For all the dogmas of ‘AI ethics’, such as debiasing datasets, fostering transparency and model fairness to make AI “better”, or ensuring that AI is “human-centred”, the elephant in the room goes unaddressed: Who is the “human” we are referring to, and which societies do these technological innovations actually benefit?
The article gets to the heart of the matter: the exploitation of cheap labour performed by disenfranchised, marginalised, and racialised workers is fundamental to AI development. These AI systems are both cause and effect of unjust labour conditions. Taking ‘AI ethics’ seriously requires ending exploitation in the AI industry, by supporting transnational organising and building cross-worker solidarity. Furthermore, we need to shatter misleading imageries and narratives of AI’s “superintelligence” in public discourse. Instead, we should recognise the invisible and exploited global workforce necessary to prop up the AI industry.
See: The Exploited Labor Behind Artificial Intelligence at Noema Mag.
Image by Nash Weerasekera for Noema Magazine.