Will AI soon put you out of a job?

The end of 2022 was all about AI tools. You can create digital artworks with DALL-E, AI profile pictures with Lensa and, to top it all off, generate an entire cover letter or essay in a matter of seconds with ChatGPT. We already knew that AI, or artificial intelligence, can do a lot, but ChatGPT is really seen as a breakthrough. What is it? And will AI make us redundant? Oh, and Devran thought he would head into the new year nice and relaxed with the chatbot, but whether that was such a good idea…

By Robin Pocornie for YouTube on December 31, 2022

Dutch Institute for Human Rights: Use of anti-cheating software can be algorithmic discrimination (i.e. racist)

Dutch student Robin Pocornie filed a complaint with the Dutch Institute for Human Rights. The surveillance software that her university used had trouble recognising her as a human being because of her skin colour. After a hearing, the Institute has now ruled that Robin has presented enough evidence to assume that she was indeed discriminated against. The ball is now in the court of the VU (her university) to prove that the software treated everybody the same.

Continue reading “Dutch Institute for Human Rights: Use of anti-cheating software can be algorithmic discrimination (i.e. racist)”

Anti-cheating software at the VU discriminates

Anti-cheating software checks before an exam whether you really are a human being. But what if the system does not recognise you because you have a dark skin colour? That is what happened to student Robin Pocornie, who took her case to the College voor de Rechten van de Mens (the Netherlands Institute for Human Rights). Together with Naomi Appelman of the Racism and Technology Centre, who assisted Robin in her case, she talks about what happened.

By Naomi Appelman, Natasja Gibbs and Robin Pocornie for NPO Radio 1 on December 12, 2022

Presumption of algorithmic discrimination successfully substantiated for the first time

A student has succeeded in presenting sufficient facts to establish a presumption of algorithmic discrimination. The woman complains that the Vrije Universiteit discriminated against her by deploying anti-cheating software. This software uses face detection algorithms. The software did not detect her when she had to log in for exams. The woman suspects that this is due to her dark skin colour. The university now has ten weeks to demonstrate that the software did not discriminate. This follows from the interim ruling that the Institute has published.

From College voor de Rechten van de Mens on December 9, 2022

Dutch student files complaint with the Netherlands Institute for Human Rights about the use of racist software by her university

During the pandemic, Dutch student Robin Pocornie had to do her exams with a light pointing straight at her face. Her White fellow students didn’t have to do that. Her university’s surveillance software discriminated against her, and that is why she has filed a complaint (read the full complaint in Dutch) with the Netherlands Institute for Human Rights.

Continue reading “Dutch student files complaint with the Netherlands Institute for Human Rights about the use of racist software by her university”

Racist Technology in Action: Uber’s racially discriminatory facial recognition system firing workers

This case of racist technology in action combines racist facial recognition systems with exploitative working conditions and algorithmic management, a perfect example of how technology can exacerbate both economic precarity and racial discrimination.

Continue reading “Racist Technology in Action: Uber’s racially discriminatory facial recognition system firing workers”

Racist Technology in Action: Proctoring software disadvantaging students of colour in the Netherlands

In an opinion piece in Parool, the Racism and Technology Center wrote about how Dutch universities use proctoring software built on facial recognition technology that systematically disadvantages students of colour (see the English translation of the opinion piece). Earlier, the center wrote about the racial bias of these systems, which has led to Black students being excluded from exams or labelled as frauds because the software did not properly recognise their faces. Despite the clear proof that Proctorio disadvantages students of colour, the University of Amsterdam still used Proctorio extensively during this June’s exam weeks.

Continue reading “Racist Technology in Action: Proctoring software disadvantaging students of colour in the Netherlands”

Racist Technology in Action: Amazon’s racist facial ‘Rekognition’

An already infamous example of racist technology is Amazon’s facial recognition system ‘Rekognition’, which had an enormous racial and gender bias. Researcher and founder of the Algorithmic Justice League Joy Buolamwini (the ‘poet of code‘), together with Deborah Raji, meticulously reconstructed how accurate Rekognition was at identifying different types of faces. Buolamwini and Raji’s study has been extremely consequential in laying bare the racism and sexism in these facial recognition systems and was featured in the popular Coded Bias documentary.
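The core idea behind such an audit is straightforward to illustrate: instead of reporting one overall accuracy number, you score the system separately for each demographic group and compare the error rates. The sketch below is only a schematic illustration of that idea; the function name and the example data are invented for this post and are not taken from Buolamwini and Raji’s study or from Amazon’s API.

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Compute classification accuracy disaggregated by subgroup.

    `records` is an iterable of (predicted_label, true_label, subgroup)
    tuples, where the subgroup could be an intersectional category such
    as 'darker-skinned women'. Returns a dict mapping each subgroup to
    its accuracy.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for predicted, true, group in records:
        total[group] += 1
        if predicted == true:
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}

# Illustrative, made-up data: a classifier that looks fine overall
# but performs far worse on one subgroup.
records = [
    ("female", "female", "lighter-skinned women"),
    ("female", "female", "lighter-skinned women"),
    ("male", "male", "lighter-skinned men"),
    ("male", "female", "darker-skinned women"),
    ("male", "female", "darker-skinned women"),
    ("female", "female", "darker-skinned women"),
]
print(accuracy_by_group(records))
# {'lighter-skinned women': 1.0, 'lighter-skinned men': 1.0,
#  'darker-skinned women': 0.333...}
```

Disaggregating the results this way is what makes the bias visible: an aggregate accuracy score would hide the fact that the errors fall almost entirely on one group.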

Continue reading “Racist Technology in Action: Amazon’s racist facial ‘Rekognition’”

Seeing infrastructure: race, facial recognition and the politics of data

Facial recognition technology (FRT) has been widely studied and criticized for its racialising impacts and its role in the overpolicing of minoritised communities. However, a key aspect of facial recognition technologies is the dataset of faces used for training and testing. In this article, we situate FRT as an infrastructural assemblage and focus on the history of four facial recognition datasets: the original dataset created by W.W. Bledsoe and his team at the Panoramic Research Institute in 1963; the FERET dataset collected by the Army Research Laboratory in 1995; MEDS-I (2009) and MEDS-II (2011), the datasets containing dead arrestees, curated by the MITRE Corporation; and the Diversity in Faces dataset, created in 2019 by IBM. Through these four exemplary datasets, we suggest that the politics of race in facial recognition are about far more than simply representation, raising questions about the potential side-effects and limitations of efforts to simply ‘de-bias’ data.

By Nikki Stevens and Os Keyes for Taylor & Francis Online on March 26, 2021
