Dutch Scientific Council knows: AI is neither neutral nor always rational

AI should be seen as a new system technology, according to The Netherlands Scientific Council for Government Policy, meaning that its impact is large, affects the whole of society, and is hard to predict. In their new Mission AI report, the Council lists five challenges for successfully embedding system technologies in society, leading to ten recommendations for governments.

How to achieve societal embedding for a technology

The Council demystifies AI, explaining that the technology is neither neutral nor always rational, and using examples very similar to those in our list of racist technologies in action to make that point. Their report also addresses the new forms of inequality that come into existence because of AI. They studiously avoid the term racism, but do address “discrimination against people of colour.” The Council is right in stating that the introduction of AI raises questions that transcend the domain of the tech companies. They rightfully see discrimination as a societal problem that requires solutions in the form of access to institutions and a normative debate about what we consider to be acceptable grounds for differentiation.

You can download the full (512-page) Dutch report here, and the attention/time-challenged can check out a more visual one-page summary. There is an English summary too.

See: Opgave AI. De nieuwe systeemtechnologie at the WRR.
