Racist and classist predictive policing exists in Europe too

The enduring idea that technology can solve many of society's existing problems continues to permeate governments. For the EUObserver, Fieke Jansen and Sarah Chander illustrate some of the problematic and harmful uses of ‘predictive’ algorithmic systems by states and public authorities across the UK and Europe.

They highlight how these systems have racist and classist undertones, often obscured by the idea that technology and data are “neutral” and “objective”. In the same vein as Sarah Brayne’s research on the LAPD, the article elaborates that the data underlying law enforcement systems are predicated on historical practices and patterns of policing that are racialised and classed. Amnesty revealed, for example, how the UK’s Gangs Matrix, a secretive database of suspected gang members in London used by the Metropolitan Police, contains a disproportionate number of black people, despite only a small fraction of gang-related crimes being committed by black people.

As the authors argue, the use of technologies that have discriminatory outcomes – often affecting marginalised communities – is not unintentional. Such uses are the norm, not the exception, amongst national governments. The Dutch social welfare benefit scandal and increasing state surveillance in France are testament to that.

At the EU level, the current proposal to regulate AI – which will govern predictive policing, biometric mass surveillance and other applications – continues to largely benefit public authorities and private companies rather than people. The prevalence of techno-solutionism across policymaking, combined with a disregard for fundamental rights, continues to be a worrying trend against the backdrop of the increasing digitisation of our society.

See: Why EU needs to be wary that AI will increase racial profiling at EUObserver.
