Predictive policing reinforces and accelerates racial bias

In a recent investigative piece, The Markup and Gizmodo analysed 5.9 million crime predictions made by PredPol, crime prediction software used by law enforcement agencies in the U.S. The results confirm the racist logics of predictive policing and its impact on individuals and neighbourhoods. Compared to Whiter, middle- and upper-income neighbourhoods, Black, Latino and poor neighbourhoods were relentlessly targeted by the software, which recommended increased police presence. The fewer White residents who lived in an area – and the more Black and Latino residents who lived there – the more likely PredPol was to predict a crime there. Some neighbourhoods in the dataset were the subject of more than 11,000 predictions.

Predictive policing rests on the idea that a machine learning model, fed historical crime data, can predict where future crime will take place, so that police can direct their activities to those areas. This is flawed for several reasons. First, there is disparity in crime reporting: crime victims do not report crime to police at equal rates – White crime victims are less likely to report violent crime than Black or Latino victims – so recorded crime data give a skewed and incomplete picture of actual crime. Yet these data are used to generate predictions. Second, these systems assume that all crimes are alike, and that every crime statistic represents a true crime; folding distinct types of crime (such as drug or sex crimes) into a single prediction distorts the output. Moreover, these systems are expensive, and neither useful nor effective. The lack of transparency into the use of such systems by law enforcement agencies remains a significant problem, particularly as large sums of public money are spent on experimental technologies that are not externally validated and continue to perpetuate racist logics. This certainly extends beyond the U.S., as highlighted by our own Naomi Appelman in a recent podcast for the Dutch Big Brother Awards.
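
To make the core flaw concrete: because predictions are trained on recorded crime rather than actual crime, and police presence itself generates records, the system can feed on its own output. Below is a minimal, hypothetical sketch of that feedback loop in Python – not PredPol's actual algorithm, and every number is invented for illustration – showing two neighbourhoods with identical underlying crime rates drifting apart purely because of a recording disparity.

```python
# Hypothetical two-neighbourhood simulation of the feedback loop in
# predictive policing. All numbers are invented for illustration.

TRUE_CRIME_RATE = [10.0, 10.0]   # identical underlying crime per day
RECORD_RATE = [0.3, 0.5]         # assumed recording/reporting disparity

patrol_share = [0.5, 0.5]        # patrols start out evenly split
recorded_totals = [0.0, 0.0]     # the "historical data" the model is fed

for day in range(50):
    for i in range(2):
        # Recorded crime scales with the recording disparity and with
        # police presence: more patrols, more crimes entered into the data.
        recorded_totals[i] += (
            TRUE_CRIME_RATE[i] * RECORD_RATE[i] * (0.5 + patrol_share[i])
        )
    # The "prediction": allocate tomorrow's patrols in proportion to
    # recorded history, as a hot-spot model effectively does.
    total = sum(recorded_totals)
    patrol_share = [t / total for t in recorded_totals]

print(patrol_share)  # drifts from [0.5, 0.5] towards roughly [0.3, 0.7]:
                     # the same true crime, very different policing.
```

Each iteration the "model" simply sends patrols wherever past records are highest, so the initial recording disparity compounds until policing is concentrated in one neighbourhood – despite both having exactly the same amount of crime.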

What the investigation shows, however, is that these systems – neither new nor innovative – serve as a justification for the carceral state, and for the police to keep doing what they are doing, bolstered by the “objectivity” of technology, in what Cory Doctorow terms “confirmation bias as a service”. Although many technology companies now distance themselves from the term “predictive policing” (PredPol itself has rebranded), they continue to sell technologies to the police that perpetuate and exacerbate racist logics, simply marketed differently.

See: Crime Prediction Software Promised to Be Free of Biases. New Data Shows It Perpetuates Them at The Markup.
