In an interview with Zoë Corbyn in the Guardian, data journalist and associate professor of journalism Meredith Broussard discusses her new book More Than a Glitch: Confronting Race, Gender and Ability Bias in Tech.
Broussard stresses, alongside many researchers and activists, that bias in AI systems cannot be fixed with better data alone: the change needs to be a societal one. In other words, biases in technological systems are not glitches; they are a feature, not a bug. The term glitch, according to Broussard, suggests something temporary that can be easily fixed, which is simply not the case. She argues that racism, sexism and ableism are fundamentally systemic problems embedded in our technological systems, because they are ultimately societal issues. She says,
More data won’t fix our technological systems if the underlying problem is society. […] We can’t fix the algorithms by feeding better data in because there isn’t better data.
In the book, she underscores the real harms embedded in technology, building on the work of Sarah Brayne, Charlton McIlwain and Simone Browne, among many others. She points to a wide variety of examples of automated systems that have done more harm than good, regardless of intention.
These harms continue to be well-documented in the U.S., in Europe and globally, yet are oftentimes ignored or side-lined. This is further demonstrated in recent discussions of generative AI, in which the hype around Artificial General Intelligence (AGI), Large Language Models (LLMs) and chatbots has sucked the air out of the room, distracting from the existing and actual harms of algorithmic systems, as rightly called out in the statement by the DAIR Institute. Khadijah Abdurahman has aptly suggested that the task is perhaps less about raising awareness of issues that are in fact already known, and more about observing how the “unknown known” circulates and becomes repressed.
See: AI expert Meredith Broussard: ‘Racism, sexism and ableism are systemic problems’ at The Guardian.