The DUO discrimination scandal, where more than 10,000 students were discriminated against, has led to multiple initiatives that aim to prevent this from happening again. None of these addresses the core problems of “predictive optimisation”.
The CPB (Netherlands Bureau for Economic Policy Analysis), for example, has published a Selectivity Scan that lets organisations developing profiling or selection algorithms assess whether those algorithms discriminate between groups, without needing access to personal data.
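To make the statistical core of such a scan concrete: at bottom it is a comparison of selection rates between groups. The Python sketch below is a hypothetical illustration of that comparison using only aggregate counts; it is not the CPB's actual method, and all numbers are invented.

```python
# Minimal sketch (not the CPB's actual method): the underlying idea of checking
# whether a selection algorithm treats groups differently is to compare
# selection rates per group. All figures below are made up for illustration.

# Hypothetical aggregate counts: how many people per group were flagged for
# extra scrutiny out of how many were assessed. No individual-level data needed.
aggregates = {
    "group_a": {"selected": 420, "assessed": 10_000},
    "group_b": {"selected": 910, "assessed": 10_000},
}

rates = {group: c["selected"] / c["assessed"] for group, c in aggregates.items()}
reference = max(rates.values())

for group, rate in rates.items():
    ratio = rate / reference
    print(f"{group}: selection rate {rate:.1%}, ratio to highest group {ratio:.2f}")

# A large gap between the ratios signals indirect discrimination that would
# then need to be justified, or, per the argument in this post, avoided altogether.
```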
The Dutch standards-setting organisation NEN is facilitating a group of experts developing an NTA (Dutch Technical Agreement) on how to create responsible risk-modelling algorithms and mitigate their risks.
Both approaches take a very technical perspective on discrimination and implicitly assume that indirect discrimination can be legitimate if adequately justified. Neither seems to address the problems inherent in “predictive optimisation”: the use of machine learning applications to predict individuals’ futures and then make decisions based on those predictions (as in the DUO case).
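In code, that pattern usually has the shape of the hypothetical sketch below: fit a model on historical outcome labels, turn its prediction for one person into a score, and let a threshold decide what happens to that person. The features, labels, and threshold are all invented for illustration; only the shape of the pattern matters here.

```python
# Hypothetical sketch of the "predictive optimisation" pattern: predict an
# individual's future from historical data, then act on that prediction.
from sklearn.linear_model import LogisticRegression

# Made-up historical records: [distance_to_parents_km, age] and whether fraud
# was later established (1) or not (0). Real systems use many more features.
X = [[1.2, 19], [0.4, 23], [35.0, 21], [0.8, 20], [50.0, 25], [2.5, 22]]
y = [1, 1, 0, 1, 0, 0]

model = LogisticRegression().fit(X, y)

# Predict one student's "risk" and turn it into a decision via a threshold.
risk = model.predict_proba([[0.9, 20]])[0, 1]
decision = "select for a home visit" if risk > 0.5 else "no action"
print(f"predicted risk {risk:.2f} -> {decision}")

# The flaws discussed below apply to exactly this shape: an accurate-looking
# score does not make the resulting decision fair or legitimate.
```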
In Against Predictive Optimization: On the Legitimacy of Decision-Making Algorithms that Optimize Predictive Accuracy, Angelina Wang, Sayash Kapoor, Solon Barocas, and Arvind Narayanan highlight seven flaws of these algorithms: for example, that good predictions may not lead to good decisions, that social outcomes aren’t accurately predictable, and that the training setting rarely matches the deployment setting. They write (emphasis theirs):
Any application of predictive optimization should be considered illegitimate by default unless the developer justifies how it avoids these flaws.
The Staatscommissie tegen Discriminatie en Racisme (State Commission against Discrimination and Racism) has also recently created a “Discrimination Test”. Its approach is less technical and more fundamental, with the explicit goal of stopping further institutional discrimination. Rather than running a set of statistical tests, it requires an organisation to go through a two- to four-month process of critical self-reflection. The Commission writes:
Completing the Discrimination Test does not in itself guarantee that work processes cannot lead to discriminatory policy. The more time and energy invested in critical self-examination, the greater the likelihood that the test will yield meaningful results.
The process ends with an action plan to minimise the risk of (institutional) discrimination. Hopefully, these action plans will put an end to predictive optimisation applied to individuals.
See: Against Predictive Optimization at Princeton Computer Science, and Voorkom discriminatie: ga van start met de Discriminatietoets (Prevent discrimination: get started with the Discrimination Test) at the Staatscommissie tegen Discriminatie en Racisme.
Image from the Staatscommissie tegen Discriminatie en Racisme site.
