Human-in-the-loop is not the magic bullet to fix AI harms

In many discussions and policy proposals on addressing the harms of AI and algorithmic decision-making, much attention and hope have been placed on human oversight as a solution. This article by Ben Green and Amba Kak urges us to question the limits of human oversight rather than seeing it as a magic bullet. For example, calling for ‘meaningful’ oversight sounds better in theory than it works in practice. Humans are prone to automation bias, struggle to evaluate and act on an algorithm’s outputs, and can exhibit racial biases in response to algorithms. Consequently, human oversight can itself produce racist outcomes, as has already been documented in areas such as policing and housing.

The irony is that many believed in the ‘promises’ of AI and algorithmic decision-making as an opportunity to improve upon human biases and cognitive limits. Yet humans are now presented as the fix for high-stakes decision-making. The idea of including a human-in-the-loop should not be dismissed entirely, but it is important to stress that we humans evidently suffer from our own biases, and discriminatory and racist behaviours, too. Additionally, those in power can simply shift blame onto the frontline human operators of AI systems, or obscure their own responsibility behind them. We should neither pin our hopes on, nor limit our imaginations to, a superficial human-in-the-loop fix. The tangible and material harms of AI should make us consider whether these algorithmic systems ought to be used at all in certain scenarios, and demand stronger accountability from the human decision-makers creating these harms, whether intentionally or otherwise.

See: ‘The False Comfort of Human Oversight as an Antidote to A.I. Harm’ in Slate.
