Dutch student Robin Pocornie filed a complaint with the Dutch Institute for Human Rights. The surveillance software her university used had trouble recognising her as a human being because of her skin colour. After a hearing, the Institute has now ruled that Robin presented enough evidence to presume that she was indeed discriminated against. The ball is now in the court of her university, the VU, which has to prove that the software treated everybody the same.
In poor neighbourhoods, the government still predicts fraud
Even after the ban on the 'dragnet' system SyRI, the government still predicts fraud at addresses in socio-economically weaker neighbourhoods. Argos and Lighthouse Reports investigated the method, in which municipalities and agencies such as the Belastingdienst, the UWV and the police share risk signals. 'This is about a government that knows so much about you that it can always find something.'
By David Davidson and Saskia Adriaens for VPRO on December 20, 2022
Anti-cheating software at the VU discriminates
Anti-cheating software checks before an exam whether you really are a human being. But what if the system does not recognise you because you have a dark skin colour? That is what happened to student Robin Pocornie, who took her case to the College voor de Rechten van de Mens. Together with Naomi Appelman of the Racism and Technology Centre, who supported Robin in her case, she talks about her experience.
By Naomi Appelman, Natasja Gibbs and Robin Pocornie for NPO Radio 1 on December 12, 2022
VU must prove that its anti-cheating software did not discriminate against a Black student
The Vrije Universiteit Amsterdam (VU) must demonstrate that its anti-cheating software did not discriminate against a student because of her dark skin colour, as she has made it sufficiently plausible that it did.
By Afran Groenewoud for NU.nl on December 9, 2022
Presumption of algorithmic discrimination successfully substantiated for the first time
A student has succeeded in presenting sufficient facts to support a presumption of algorithmic discrimination. The woman complains that the Vrije Universiteit discriminated against her by using anti-cheating software, which relies on face detection algorithms. The software failed to detect her when she had to log in for her exams, and she suspects that this was because of her dark skin colour. The university now has ten weeks to demonstrate that the software did not discriminate. This follows from the interim ruling that the College published.
From College voor de Rechten van de Mens on December 9, 2022
Human rights institute: discrimination by an algorithm 'plausible' for the first time, VU must prove otherwise
It is 'plausible' that the algorithm in anti-cheating software discriminated against a student at the Vrije Universiteit (VU), says the College voor de Rechten van de Mens. It is now up to the VU to prove otherwise.
By Fleur Damen for Volkskrant on December 9, 2022
Amsterdam’s Top400 project stigmatises and over-criminalises youths
A critical, in-depth report on Top400 – a crime prevention project by the Amsterdam municipality that targets and polices young people between the ages of 12 and 23 – has emphasised the stigmatising, discriminatory, and invasive effects of the Top400 on youths and their families.
Report: How police surveillance tech reinforces abuses of power
The UK organisation No Tech for Tyrants (NT4T) has published an extensive report on the use of surveillance technologies by the police in the UK, US, Mexico, Brazil, Denmark and India, written in collaboration with researchers and activists from these countries. The report, titled “Surveillance Tech Perpetuates Police Abuse of Power”, examines the relationship between policing and technology through in-depth case studies.
Criticism of Eberhard van der Laan's Top 400: 'Mothers still don't know why their sons are on that list'
In Moeders (Mothers) – premiering Thursday at Idfa – Nirit Peled takes aim at the Top 400, a list of the names of Amsterdam youths who are at risk of sliding into serious crime.
By David Hielkema and Nirit Peled for Het Parool on November 9, 2022
The devastating consequences of risk based profiling by the Dutch police
Diana Sardjoe writes for Fair Trials about how her sons were profiled by the Amsterdam police on the basis of risk models (a form of predictive policing) called the 'Top600' (for adults) and the 'Top400' (for people aged 12 to 23). Because of this profiling her sons were “continually monitored and harassed by police.”
High time for an investigation into institutional racism in municipalities
There should be a moratorium on the use of algorithms in risk profiling, argues Samira Rafaela, Member of the European Parliament for D66.
By Samira Rafaela for Binnenlands Bestuur on October 10, 2022
My sons were profiled by a racist predictive policing system — the AI Act must prohibit these systems
When I found out my sons were placed on lists called the ‘Top 600’ and the ‘Top 400’ by the local Amsterdam council, I thought I was finally getting help. The council says the purpose of these lists, created by predictive and profiling systems, is to identify and give young people who have been in contact with the police “extra attention from the council and organisations such as the police, local public health service and youth protection,” to prevent them from coming into contact with police again. This could not have been further from the truth.
By Diana Sardjoe for Medium on September 28, 2022
Dutch student files complaint with the Netherlands Institute for Human Rights about the use of racist software by her university
During the pandemic, Dutch student Robin Pocornie had to take her exams with a light pointing straight at her face. Her fellow students who were White didn't have to do that. Her university's surveillance software discriminated against her, and that is why she has filed a complaint (read the full complaint in Dutch) with the Netherlands Institute for Human Rights.
Student reports discrimination involving anti-cheating software to the College voor de Rechten van de Mens
A student at the Vrije Universiteit Amsterdam (VU) is filing a complaint with the College voor de Rechten van de Mens (pdf). When using the anti-cheating software for her exams, she was only recognised when she shone a lamp directly in her face. According to her, the VU should have checked beforehand whether students with dark skin would be recognised as well as white students.
From NU.nl on July 15, 2022
Student goes to the College voor de Rechten van de Mens over the VU's use of racist software
During the corona pandemic, student Robin Pocornie had to take her exams with a lamp pointed directly at her face. Her white fellow students did not have to do this. The VU's surveillance software discriminated against her, which is why she is filing a complaint with the College voor de Rechten van de Mens today.
Shocking report by the Algemene Rekenkamer: state algorithms are a shitshow
The Algemene Rekenkamer (Netherlands Court of Audit) looked into nine different algorithms used by the Dutch state. It found that only three of them fulfilled the most basic of requirements.
‘Smart’ technologies to detect racist chants at Dutch football matches
The KNVB (Royal Dutch Football Association) is taking a tech approach to tackling racist fan behaviour during matches, an approach that runs a serious risk of falling into the techno-solutionism trap.
Pilot with smart technology against discriminatory chants
A pilot with smart technology has been launched with the aim of combating discriminatory chants in stadiums. Until now, available video footage combined with audio recordings too often fell short as evidence. As part of 'Ons voetbal is van iedereen' ('Our football belongs to everyone'), a joint plan of the national government and the football sector, the business community was challenged to come up with concrete solutions in collaboration with professional football organisations (Betaald Voetbal Organisaties, BVO). With this pilot, that challenge enters a new phase.
From KNVB.nl on June 1, 2022
Various central government algorithms do not meet basic requirements
Responsible use of algorithms by executive agencies of the central government is possible, but in practice this is not always the case. The Algemene Rekenkamer established that 3 algorithms meet all basic requirements. The 6 others carry various risks: inadequate monitoring of performance or effects, bias, data leaks or unauthorised access.
From Algemene Rekenkamer on May 18, 2022
The Dutch government wants to continue to spy on activists’ social media
Investigative journalism by NRC brought to light that the Dutch NCTV (the National Coordinator for Counterterrorism and Security) uses fake social media accounts to track Dutch activists. The agency also targets activists working in the social justice or anti-discrimination space, tracking their work, sentiments and movements through their social media accounts. This is a clear example of how digital communication allows governments to intensify their surveillance and criminalisation of political opinions outside the mainstream.
Racism and technology in the Dutch municipal elections
Last week, all focus in the Netherlands was on the municipal elections. Last Wednesday, the city councils were chosen that will govern for the next four years. This year's elections were mainly characterised by a historically low turnout and the traditional overall wins for local parties. The focus of the Racism and Technology Center, however, is of course on whether the new municipal councils and governments will put issues at the intersection of social justice and technology on the agenda.
Bits of Freedom speaks to the Dutch Senate on discriminatory algorithms
Through an official parliamentary investigative committee, the Dutch Senate is investigating how new regulation or law-making processes can help combat discrimination in the Netherlands. The committee focuses on four broad domains: the labour market, education, social security and policing. As part of these wide-ranging investigative efforts, the Senate is hearing from a range of experts and civil society organisations. One contribution stands out from the perspective of racist technology: Nadia Benaissa from Bits of Freedom highlighted the dangers of predictive policing and other uses of automated systems in law enforcement.
The discrimination that hides in data
The Eerste Kamer (Dutch Senate) is investigating the effectiveness of legislation against discrimination. Last Friday, we had the opportunity to tell the members of parliament about discrimination and algorithms. Below is the core of our story.
By Nadia Benaissa for Bits of Freedom on February 8, 2022
Dutch Data Protection Authority (AP) fines the tax agency for discriminatory data processing
The Dutch Data Protection Authority, the Autoriteit Persoonsgegevens (AP), has fined the Dutch Tax Agency 2.75 million euros for discriminatory data processing as part of the child benefits scandal.
Belastingdienst fined for discriminatory and unlawful practices
The Autoriteit Persoonsgegevens (AP) is imposing a fine of 2.75 million euros on the Belastingdienst, because for years the Belastingdienst processed the (dual) nationality of childcare benefit applicants in an unlawful, discriminatory and therefore improper manner. These are serious violations of the privacy law, the General Data Protection Regulation (GDPR).
From Autoriteit Persoonsgegevens on December 7, 2021
Police linked innocent asylum seekers to criminal justice information
The police compared asylum seekers' telephone data with criminal justice information. By the police's own admission, that did not 'square' with the privacy law.
By Martin Kuiper and Romy van der Poel for NRC on December 7, 2021
Dutch Scientific Council knows: AI is neither neutral nor always rational
AI should be seen as a new system technology, according to the Netherlands Scientific Council for Government Policy, meaning that its impact is large, it affects the whole of society, and it is hard to predict. In its new Mission AI report, the Council lists five challenges for successfully embedding system technologies in society, leading to ten recommendations for governments.
Amnesty’s grim warning against another ‘Toeslagenaffaire’
In its report of 25 October, Amnesty slams the Dutch government's use of discriminatory algorithms in the child benefits scandal (toeslagenaffaire) and warns that the likelihood of such a scandal occurring again is very high. The report, aptly titled ‘Xenophobic machines – Discrimination through unregulated use of algorithms in the Dutch childcare benefits scandal’, conducts a human rights analysis of a specific sub-element of the scandal: the use of algorithms and risk models. It is based on the report of the Dutch data protection authority and several other government reports.
Xenophobic machines: Discrimination through unregulated use of algorithms in the Dutch childcare benefits scandal
Social security enforcement agencies worldwide are increasingly automating their processes in the hope of detecting fraud. The Netherlands is at the forefront of this development. The Dutch tax authorities adopted an algorithmic decision-making system to create risk profiles of individuals applying for childcare benefits in order to detect inaccurate and potentially fraudulent applications at an early stage. Nationality was one of the risk factors used by the tax authorities to assess the risk of inaccuracy and/or fraud in the applications submitted. This report illustrates how the use of individuals’ nationality resulted in discrimination as well as racial profiling.
From Amnesty International on October 25, 2021
Government: Stop using discriminatory algorithms
In her Volkskrant opinion piece, Nani Jansen Reventlow makes a forceful argument for the government to stop using algorithms that lead to discrimination and exclusion. Jansen Reventlow, director of the Digital Freedom Fund, employs a myriad of examples to show how disregarding the social nature of technological systems can reproduce existing social injustices such as racism and discrimination. She discusses the automatic fraud detection system SyRI, which was ruled to be in violation of fundamental rights (as well as its dangerous successor, Super SyRI), and the racist proctoring software we wrote about earlier.
The use of racist technology is not inevitable, but a choice we make
Last month, we wrote a piece in Lilith Mag that builds on some of the examples we have previously highlighted – the Dutch childcare benefits scandal, the use of online proctoring software, and popular dating app Grindr – to underscore two central ideas.
Technology can be racist and we should talk about that
The past year has been filled with examples of technologies being racist. Yet how we can fight this is hardly part of the societal debate in the Netherlands. This must change. Making these racist technologies visible is the first step towards acknowledging that technology can indeed be racist.
Opinion: Stop government algorithms that lead to discrimination and exclusion
Executive agencies use countless 'blacklists' of potential fraudsters. After the toeslagenaffaire, this can lead to (indirect) ethnic profiling and new dramas.
By Nani Jansen Reventlow for Volkskrant on July 15, 2021
Covid-19 data: making racialised inequality in the Netherlands invisible
The CBS, the Dutch national statistics authority, issued a report in March showing that someone's socio-economic status is a clear risk factor for dying of Covid-19. In an insightful piece, researchers Linnet Taylor and Tineke Broer criticise this report and show that the way in which the CBS collects and aggregates data on Covid-19 cases and deaths obfuscates the full extent of racialised or ethnic inequality in the impact of the pandemic.
Racist Technology in Action: Proctoring software disadvantaging students of colour in the Netherlands
In an opinion piece in Het Parool, the Racism and Technology Center wrote about how Dutch universities use proctoring software with facial recognition technology that systematically disadvantages students of colour (see the English translation of the opinion piece). Earlier, the center wrote about the racial bias of these systems, which leads to Black students being excluded from exams or labelled as frauds because the software does not properly recognise their faces as faces. Despite the clear proof that Proctorio disadvantages students of colour, the University of Amsterdam still used it extensively during this June's exam weeks.
Call to the University of Amsterdam: Stop using racist proctoring software
The University of Amsterdam can no longer justify the use of proctoring software for remote examinations now that we know that it has a negative impact on people of colour.
Call to the UvA: stop using racist proctoring software
The UvA can no longer justify using proctoring for its exams, now that it is clear that this surveillance software has a negative impact on people of colour in particular.
Opinion: ‘UvA, don't conceal the racism of proctoring with fine words’
Research shows that surveillance software disadvantages people of colour. So why does the UvA still use it, ask Naomi Appelman, Jill Toh and Hans de Zwart.
By Hans de Zwart, Jill Toh and Naomi Appelman for Het Parool on July 6, 2021
Now you see it, now you don’t: how the Dutch Covid-19 data gap makes ethnic and racialised inequality invisible
All over the world, in the countries hardest hit by Covid-19, there is clear evidence that marginalised groups are suffering the worst impacts of the disease. This plays out differently in different countries: for instance in the US, there are substantial differences in mortality rates by race and ethnicity. Israelis have a substantially lower death rate from Covid-19 than Palestinians in the West Bank or Gaza. In Brazil, being of mixed ancestry is the second most important risk factor, after age, for dying of Covid-19. These racial and ethnic (and related) differences appear also to be present in the Netherlands, but have been effectively rendered politically invisible by the national public health authority's refusal to report on them.
By Linnet Taylor and Tineke Broer for Global Data Justice on June 17, 2021
Rotterdam’s use of algorithms could lead to ethnic profiling
The Rekenkamer Rotterdam (a Court of Audit) looked at how the city of Rotterdam is using predictive algorithms and whether that use could lead to ethical problems. In their report, they describe how the city lacks a proper overview of the algorithms that it is using, how there is no coordination and thus no one takes responsibility when things go wrong, and how sensitive data (like nationality) were not used by one particular fraud detection algorithm, but that so-called proxy variables for ethnicity – like low literacy, which might correlate with ethnicity – were still part of the calculations. According to the Rekenkamer this could lead to unfair treatment, or as we would call it: ethnic profiling.