Racist Technology in Action: Predicting future criminals with a bias against Black people

In 2016, ProPublica investigated the fairness of COMPAS, a system used by courts in the United States to assess the likelihood that a defendant will commit another crime. COMPAS bases this prediction on a risk assessment form filled out about the defendant. Judges are expected to take the resulting risk score into account when they decide on sentencing.

ProPublica found that “blacks are almost twice as likely as whites to be labeled a higher risk but not actually re-offend,” and that COMPAS “makes the opposite mistake among whites: They are much more likely than blacks to be labeled lower-risk but go on to commit other crimes.” Even though the risk assessment form doesn’t ask directly about ethnicity or race, it does contain many questions that can serve as a proxy for race, such as questions about poverty or about the prevalence of crime in your neighbourhood.

ProPublica’s research has kick-started a whole academic field that is trying to increase the fairness (and accountability and transparency) of machine learning, the technology that is used for these kinds of systems. From this field, we now know that these systems can never be fair in all the ways that we would intuitively understand fairness. This is another argument for why we should never forget that making systems more ‘fair’ doesn’t necessarily fix the injustices in society.
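To see why these notions of fairness cannot all hold at once, here is a minimal numeric sketch (with illustrative numbers, not ProPublica’s actual data). It uses the known identity relating a score’s calibration (positive predictive value) to its false positive rate: if two groups have different base rates of re-offending, holding calibration and sensitivity equal across groups forces their false positive rates apart.

```python
def fpr_given_calibration(base_rate, ppv, sensitivity):
    """False positive rate implied by the identity
    FPR = (p / (1 - p)) * ((1 - PPV) / PPV) * sensitivity,
    where p is the group's base rate of re-offending."""
    return (base_rate / (1 - base_rate)) * ((1 - ppv) / ppv) * sensitivity

# Hold calibration (PPV) and sensitivity fixed across two groups with
# different (made-up) base rates; their false positive rates then differ.
for name, base in [("higher base rate group", 0.5),
                   ("lower base rate group", 0.3)]:
    fpr = fpr_given_calibration(base, ppv=0.6, sensitivity=0.7)
    print(f"{name}: false positive rate = {fpr:.2f}")
```

With these example numbers the false positive rates come out around 0.47 versus 0.20: a score that is equally well calibrated for both groups still mislabels far more non-re-offenders in the group with the higher base rate, which is exactly the kind of disparity ProPublica reported.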
