Through an official parliamentary investigative committee, the Dutch Senate is investigating how new regulation or law-making processes can help combat discrimination in the Netherlands. The committee focuses on four broad domains: the labour market, education, social security and policing. As part of this wide-ranging investigation, the Senate is hearing from a range of experts and civil society organisations. From the perspective of racist technology, one contribution stands out: Nadia Benaissa from Bits of Freedom highlighted the dangers of predictive policing and other uses of automated systems in law enforcement.
Benaissa highlighted that the data feeding predictive policing systems are not neutral or objective: they always rely on past events and human judgements and, as such, always tell a story. In the following clip she explains (in Dutch) why it is important not only to focus on specific sensitive data, such as race or gender, but also to take the wider context into account when answering the question of which data should be omitted to prevent discrimination:
She also spoke vividly about the importance of transparency and accountability in the use of data and algorithmic systems, not only within policing but across the entire government. Here, she specifically referred to the proposed EU AI Act, which does not go far enough in this regard, and called upon the Senate to create human rights impact assessments for any type of algorithm used by the Dutch government.
Besides Nadia’s contribution, the Senate also heard several other experts who spoke on topics closely related to racist technologies. For example, Sennay Ghebreab talked about possible discrimination in the use of algorithms in social security, and Dionne Abdoelhafiezkhan, from IZI solutions and Controle Alt Delete, spoke about ethnic profiling within the police.
See Nadia Benaissa speaking at the Senate, or read her blog post at Bits of Freedom.