UK police facial recognition: Another chapter in a long story of racist technology

The UK police admitted that their facial recognition technology has a significant racial bias.

The Guardian outlines how a report from the National Physical Laboratory shows that the technology incorrectly matches Black people at a rate of 5.5% and Asian people at a rate of 4.0%, compared to a false match rate of just 0.04% for white people. Put differently, those figures imply that roughly one in eighteen Black people scanned is falsely matched, against one in 2,500 white people, a disparity of more than a hundredfold. Black women face a particularly egregious error rate of 9.9%.

This revelation came just hours after the UK’s policing minister described facial recognition as “the biggest breakthrough since DNA matching” and said the government plans to expand its use in public spaces. It is a confronting juxtaposition, one that reveals how eager governments are to deploy these demonstrably racist systems.

The racial bias in facial recognition isn’t news. It is a pattern we have seen again and again, and one we have written about too many times over the past years. Robin Pocornie’s case is a prime example. We litigated her complaint at the Netherlands Institute for Human Rights on precisely this issue: how Black women in particular are systematically discriminated against by facial recognition systems.

But here’s the crucial point: even if these systems could be made “accurate” and non-discriminatory, their deployment would still be deeply problematic. The question isn’t just whether the technology works—it’s who it is used on and what purposes it serves.

Look at how U.S. Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP) are using facial recognition. Recent reports from 404 Media document agents stopping people on the street, including children on bikes, and scanning their faces to “verify citizenship”.

This isn’t about catching “serious offenders”, as the UK claims its system does. This is racial profiling turbocharged by technology. These agents aren’t randomly stopping people; they’re targeting those who “look” undocumented, based on racist assumptions about who belongs in America. Facial recognition simply provides a veneer of technological objectivity and expands what remains fundamentally discriminatory policing.

As one expert told 404 Media, this normalisation of surveillance tactics that would have been unthinkable just years ago represents “pure dystopian creep”.

ICE and CBP’s street-level facial recognition scanning operates as a form of racialised social control, a tool of fascist enforcement that treats certain bodies as inherently suspect and subject to verification.

The question isn’t “how do we make facial recognition work better?” It’s “should we be using this technology at all, and if so, under what strict limitations?” When police and immigration authorities deploy facial recognition, they’re not just using a flawed tool—they’re amplifying existing patterns of racist policing and state control that no amount of technical refinement can fix.

See: ‘Urgent clarity’ sought over racial bias in UK police facial recognition technology at The Guardian.

Image by Comuzi / Mirror B / © BBC / Licensed under CC-BY 4.0.
