Why ‘debiasing’ will not solve racist AI

Policy makers are starting to understand that many AI systems exhibit some form of racial bias. So they are happy when computer scientists tell them that ‘debiasing’ offers a solution to these problems: testing a system for racial and other forms of bias, and adjusting it until those biases no longer show up in the results.
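To make concrete what that kind of testing looks like in practice, here is a minimal sketch of a common fairness check (a demographic parity comparison). The column names, data, and tolerance threshold are all illustrative assumptions, not anything from the report:

```python
# A minimal sketch of the kind of 'debiasing' check described above:
# measure whether a model's positive-outcome rate differs across groups,
# then flag the gap. Column names and threshold are illustrative.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame,
                           group_col: str = "race",
                           outcome_col: str = "approved") -> float:
    """Return the largest difference in positive-outcome rates between groups."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Made-up decisions from a hypothetical loan-approval model.
decisions = pd.DataFrame({
    "race":     ["A", "A", "B", "B", "B", "A"],
    "approved": [1,   1,   0,   1,   0,   1],
})

gap = demographic_parity_gap(decisions)
if gap > 0.1:  # illustrative tolerance; real audits choose this contextually
    print(f"Bias detected: outcome-rate gap of {gap:.2f} across groups")
```

A ‘debiasing’ workflow then tweaks the model or its training data until a metric like this falls below the chosen threshold. As the report argues, passing such a test says nothing about the structural problems surrounding the system.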

European Digital Rights (EDRi) commissioned a report, asking academic researchers whether ‘debiasing’ is indeed a feasible way towards a more equitable use of AI. Turns out it isn’t.

In the report, Agathe Balayn and Seda Gürses point out the limitations of ‘debiasing’. Their main concern is that a focus on ‘debiasing’ shifts political problems (of structural discrimination) into a technical domain, which is dominated by the large commercial technology companies.

To enhance the policy debate about AI in the EU, the authors propose four alternative ways of looking at AI:

  • The machine learning view – some aspects of machine learning are inherently harmful.
  • The production view – the making of AI systems has potentially harmful effects that fall outside of the system itself.
  • The infrastructural view – the computational infrastructure needed for AI systems is in the hands of a few, creating power imbalances.
  • The organizational view – AI will automate and centralise workflows, affecting the structure of the public sector and democracy.

See: If AI is the problem, is debiasing the solution? at European Digital Rights (EDRi).
