The current wave of reporting on the AI bubble has one advantage: it also creates some space in the media to write about how AI reflects the existing inequities in our society.
EenVandaag featured Robin Pocornie talking (in Dutch) about her complaint against the VU’s use of Proctorio.
Robin is the first Dutch citizen to prove that an algorithm discriminated against her on the basis of her skin colour. Experts Sennay Ghebreab and Kees Verhoeven demystify AI and keep telling us that it is humans making these systems about humans, using the data of humans, with an impact on humans. So it is humans that should be held responsible for the impact of these systems.
See: College voor de Rechten van de Mens roept mensen op om discriminerende algoritmes te melden at EenVandaag.
With the support of the Racism and Technology Center, Oumaima Hajri has organised an alliance against the use of AI in the military domain. In a Dutch opinion piece in De Kanttekening, Oumaima explains how the Dutch government – with its enthusiasm for the use of ‘responsible’ AI in the military – has chosen a deeply naive techno-solutionist path, and seems to lack the reflective capacity to learn from the mistakes it has made in the past.
See: Kunstmatige intelligentie moet in de pas marcheren van mensenrechten at De Kanttekening.