AI ‘ethics’ needs to move beyond its narrow idea of ‘representation’ and acknowledge antiblackness

The accessibility and commercial rollout of AI have led to an increase in discriminatory use cases.

We have covered many of these before, including our ongoing concern about the governmental–big tech axis that targets and overpolices racialised communities. Out of arguments over mitigating these severe, technology-mediated violations of human rights emerged the concept of ‘AI ethics’: a set of ground rules meant to make the development and deployment of AI more ‘fair’, ‘responsible’, and ‘unbiased’. A great start, perhaps, but in practice it does not serve the people most affected. Those working in AI ethics, or ‘responsible AI’, consistently refuse to call it what it is. Not ‘bias’. Antiblackness.

Christopher L. Dancy and P. Khalil Saucier made this case five years ago, centring antiblackness as a structural, ontological problem. This is the very foundation on which AI systems are built and rolled out.

This argument still holds today. The current framing of ‘bias’ is insufficient because antiblackness is not a bug in the system; it is a structural problem that AI learns. Structural racism, which shapes the feedback loop AI systems are trained on, needs to be recognised as the root problem, not the individual bias of a developer. Most damning: of the 64 AI ethics frameworks and guidelines reviewed at the time, none explicitly discussed antiblackness, and only 14% engaged with race at all. Dancy and Saucier also warn that more representation alone is not the answer. Improving facial recognition accuracy for Black people, for example, does not make the system less harmful. In the context of routine oversurveillance, it intensifies it. AI systems cannot be assessed in isolation from the racist context in which they are built. That context feeds directly back into the system.

In the five years since, attempts have been made to correct this colour-blindness. The EU AI Act identifies algorithmic racial bias as a critical risk factor. That is something, but it still approaches the problem through a ‘bias and representation’ framework. Dancy and Saucier argue this is precisely not enough, as the issue is antiblackness as a structural condition, not a technical glitch to be patched.

The hard truth is that the majority of AI ethics still consists of people whose perspective does not fundamentally treat antiblackness as the root of the problem. De-biasing without critical examination does not fix anything. It whitewashes these systems and perpetuates the routinised oversurveillance of Black people. It produces the appearance of progress while the underlying structure stays intact. If you are an AI ethicist who does not place antiblackness at the centre of your work, you are building your career and your credibility on the backs of Black people. And this is antiblack. Let’s stop hiding behind ‘fairness’ and ‘diversity’ and call it what it is, otherwise there is nothing ethical about it.

See: AI and Blackness: Toward Moving Beyond Bias and Representation at IEEE.org.

Image by Clarote & AI4Media / User/Chimera / Licensed under CC-BY 4.0.
