In a roundtable on artificial intelligence in the Dutch Parliament, Quirine Eijkman spoke on behalf of the Netherlands Institute for Human Rights about Robin Pocornie’s case against the discriminatory use of Proctorio at VU Amsterdam.
In the video below, Eijkman explains (in Dutch) why it is so difficult for individuals to prove that an AI has discriminated against them, why the responsibility and accountability for the outcomes of AI systems lie with the institution that deploys them (not just with the vendor), and why transparency about how the AI works is essential so that these systems can be contested.
Also read the (Dutch) position paper by the Institute, in which it argues that preventing discrimination often receives too little attention when AI technologies are developed, that (indirect) discrimination often happens without anyone noticing, and that certain implementations of AI put the protection of human rights at risk.