Robin Pocornie was featured in the Dutch current affairs programme EenVandaag. Professor Sennay Ghebreab and former Member of Parliament Kees Verhoeven provided expertise and commentary.
From Vrij Nederland: Do we get what we deserve?
Let me narrow that question down to my own field: are decisions made by technology just? Do you deserve the decision that rolls out of the machine?
Will AI soon put you out of a job?
The end of 2022 was dominated by AI tools. You can create digital artworks with DALL-E, generate AI profile pictures with Lensa and, to top it all off, produce an entire cover letter or essay in a matter of seconds with ChatGPT. We already knew that AI, or artificial intelligence, can do a lot, but ChatGPT is genuinely seen as a breakthrough. What is it? And will AI make us redundant? Oh, and Devran thought he would ease into the new year by relaxing with the chatbot, but whether that was such a good idea…
By Robin Pocornie for YouTube on December 31, 2022
Dutch Institute for Human Rights speaks about Proctorio at Dutch Parliament
In a roundtable on artificial intelligence in the Dutch Parliament, Quirine Eijkman spoke on behalf of the Netherlands Institute for Human Rights about Robin Pocornie’s case against the discriminatory use of Proctorio at the VU university.
Apple Watch class action alleges device fails to accurately detect blood oxygen levels in people of color
The Apple Watch fails to accurately measure the blood oxygen levels in people of color, according to a class action lawsuit filed Dec. 24 in New York federal court.
By Anne Bucher for Top Class Actions on December 29, 2022
Dutch Institute for Human Rights: Use of anti-cheating software can be algorithmic discrimination (i.e. racist)
Dutch student Robin Pocornie filed a complaint with the Netherlands Institute for Human Rights. The surveillance software her university used had trouble recognising her as a human being because of her skin colour. After a hearing, the Institute has now ruled that Robin has presented enough evidence to presume that she was indeed discriminated against. The ball is now in the court of the VU (her university), which must prove that the software treated everybody the same.
Anti-cheating software at the VU discriminates
Anti-cheating software checks before an exam whether you really are a human being. But what if the system does not recognise you because you have a dark skin colour? That is what happened to student Robin Pocornie, who took her case to the College voor de Rechten van de Mens. Together with Naomi Appelman of the Racism and Technology Centre, who supported Robin in her case, she talks about what happened.
By Naomi Appelman, Natasja Gibbs and Robin Pocornie for NPO Radio 1 on December 12, 2022
VU must prove that its anti-cheating software did not discriminate against a Black student
The Vrije Universiteit Amsterdam (VU) must demonstrate that its anti-cheating software did not discriminate against a student because of her dark skin colour, as she has made a sufficiently plausible case that it did.
By Afran Groenewoud for NU.nl on December 9, 2022
For the first time, a presumption of algorithmic discrimination has been successfully substantiated
A student has succeeded in presenting sufficient facts to establish a presumption of algorithmic discrimination. The woman complains that the Vrije Universiteit discriminated against her by deploying anti-cheating software. This software uses face detection algorithms. The software failed to detect her when she had to log in for her exams, and she suspects this was due to her dark skin colour. The university now has ten weeks to demonstrate that the software did not discriminate. This follows from the interim ruling published by the College voor de Rechten van de Mens.
From College voor de Rechten van de Mens on December 9, 2022
Human rights institute: discrimination by an algorithm deemed ‘plausible’ for the first time, VU must prove otherwise
It is ‘plausible’ that the algorithm in anti-cheating software discriminated against a student at the Vrije Universiteit (VU), says the College voor de Rechten van de Mens. It is now up to the VU to prove otherwise.
By Fleur Damen for Volkskrant on December 9, 2022
Uber’s facial recognition is locking Indian drivers out of their accounts
Some drivers in India are finding their accounts permanently blocked. Better transparency of the AI technology could help gig workers.
By Varsha Bansal for MIT Technology Review on December 6, 2022
Chinese security firm advertises ethnicity recognition technology while facing UK ban
Campaigners concerned that ‘same racist technology used to repress Uyghurs is being marketed in Britain’.
By Alex Hern for The Guardian on December 4, 2022
Surveillance Tech Perpetuates Police Abuse of Power
Among global movements to reckon with police powers, a new report from UK research group No Tech For Tyrants unveils how police use surveillance technology to abuse power around the world.
From No Tech for Tyrants on November 7, 2022
The unseen Black faces of AI algorithms
Pivotal study of facial recognition algorithms revealed racial bias.
By Abeba Birhane for Nature on October 19, 2022
Buzzy Silicon Valley startup wants to make the world sound whiter
Sanas’ service has already launched in seven call centers. But experts are concerned it could dehumanize workers.
By Joshua Bote for SFGATE on August 22, 2022
Dutch student files complaint with the Netherlands Institute for Human Rights about the use of racist software by her university
During the pandemic, Dutch student Robin Pocornie had to sit her exams with a light pointing straight at her face. Her White fellow students did not have to do this. Her university’s surveillance software discriminated against her, which is why she has filed a complaint (read the full complaint in Dutch) with the Netherlands Institute for Human Rights.
How AI reinforces racism in Brazil
Author Tarcízio Silva on how algorithmic racism exposes the myth of “racial democracy.”
By Alex González Ormerod and Tarcízio Silva for Rest of World on April 22, 2022
Deception, exploited workers, and cash handouts: How Worldcoin recruited its first half a million test users
The startup promises a fairly distributed, cryptocurrency-based universal basic income. So far all it’s done is build a biometric database from the bodies of the poor.
By Adi Renaldi, Antoaneta Rouss, Eileen Guo and Lujain Alsedeg for MIT Technology Review on April 6, 2022
How our world is designed for the ‘reference man’ and why proctoring should be abolished
We believe that software used for monitoring students during online tests (so-called proctoring software) should be abolished because it discriminates against students with a darker skin colour.
Racist Technology in Action: Uber’s racially discriminatory facial recognition system firing workers
This example of racist technology in action combines racist facial recognition systems with exploitative working conditions and algorithmic management to produce a perfect example of how technology can exacerbate both economic precarity and racial discrimination.
ADCU initiates legal action against Uber’s workplace use of racially discriminatory facial recognition systems
ADCU has launched legal action against Uber over the unfair dismissal of a driver and a courier after the company’s facial recognition system failed to identify them.
By James Farrar, Paul Jennings and Yaseen Aslam for The App Drivers and Couriers Union on October 6, 2021
Brazil’s embrace of facial recognition worries Black communities
Activists say the biometric tools, developed principally around white datasets, risk reinforcing racist practices.
By Charlotte Peet for Rest of World on October 22, 2021
A Detroit community college professor is fighting Silicon Valley’s surveillance machine. People are listening.
Chris Gilliard grew up with racist policing in Detroit. He sees a new form of oppression in the tech we use every day.
By Chris Gilliard and Will Oremus for Washington Post on September 17, 2021
How Stereotyping and Bias Lingers in Product Design
Brands originally built on racist stereotypes have existed for more than a century. Now racial prejudice is also creeping into the design of tech products and algorithms.
From YouTube on September 15, 2021
Are we automating racism?
Vox host Joss Fong wanted to know… “Why do we think tech is neutral? How do algorithms become biased? And how can we fix these algorithms before they cause harm?”
Why tech needs to focus on the needs of marginalized groups
Marginalized groups are often not represented in technology development. What we need is inclusive participation to centre on the concerns of these groups.
By Nani Jansen Reventlow for The World Economic Forum on July 8, 2021
Racist Technology in Action: Proctoring software disadvantaging students of colour in the Netherlands
In an opinion piece in Het Parool, the Racism and Technology Center wrote about how Dutch universities use proctoring software with facial recognition technology that systematically disadvantages students of colour (see the English translation of the opinion piece). Earlier, the Center wrote about the racial bias of these systems, which leads to Black students being excluded from exams or labelled as frauds because the software does not properly recognise their faces as faces. Despite the clear proof that Proctorio disadvantages students of colour, the University of Amsterdam still used Proctorio extensively in this June’s exam weeks.
Call to the University of Amsterdam: Stop using racist proctoring software
The University of Amsterdam can no longer justify the use of proctoring software for remote examinations now that we know that it has a negative impact on people of colour.
Call to the UvA: stop using racist proctoring software
The UvA can no longer justify using proctoring for its exams now that it is clear that the surveillance software has a disproportionately negative impact on people of colour.
Opinion: ‘UvA, do not disguise the racism of proctoring with fine words’
Research shows that surveillance software disadvantages people of colour. So why does the UvA still use it, ask Naomi Appelman, Jill Toh and Hans de Zwart.
By Hans de Zwart, Jill Toh and Naomi Appelman for Het Parool on July 6, 2021
Reinforce rights, not racism: Why we must fight biometric mass surveillance in Europe
Gwendoline Delbos-Corfield MEP in conversation with Laurence Meyer, from the Digital Freedom Fund, about the dangers of the increasing use of biometric mass surveillance – both within the EU and outside it, as well as the impact it can have on the lives of people who are already being discriminated against.
By Gwendoline Delbos-Corfield and Laurence Meyer for Greens/EFA on June 24, 2021
Another Arrest, and Jail Time, Due to a Bad Facial Recognition Match
A New Jersey man was accused of shoplifting and trying to hit an officer with a car. He is the third known Black man to be wrongfully arrested based on face recognition.
By Kashmir Hill for The New York Times on December 29, 2020
Inside the fight to reclaim AI from Big Tech’s control
For years, Big Tech has set the global AI research agenda. Now, groups like Black in AI and Queer in AI are upending the field’s power dynamics to build AI that serves people.
By Karen Hao for MIT Technology Review on June 14, 2021
Photo booth does not work for Black people
A photo booth operated by Hamburg’s Landesbetrieb Verkehr apparently does not recognise Black people. As a result, a woman from Hamburg was unable to apply for an international driving licence in December.
From NDR.de on July 25, 2020
How normal am I?
Experience the world of face detection algorithms in this freaky test.
By Tijmen Schep for How Normal Am I
Twitter rolls out bigger images and cropping control on iOS and Android
Twitter just made a change to the way it displays images that has visual artists on the social network celebrating.
By Taylor Hatmaker for TechCrunch on May 6, 2021
EU’s new AI law risks enabling Orwellian surveillance states
“Far from a ‘human-centred’ approach, the draft law in its current form runs the risk of enabling Orwellian surveillance states,” writes @sarahchander from @edri.
By Sarah Chander for Euronews on April 22, 2021
Racist Technology in Action: Amazon’s racist facial ‘Rekognition’
An already infamous example of racist technology is Amazon’s facial recognition system ‘Rekognition’, which had an enormous racial and gender bias. Researcher and founder of the Algorithmic Justice League Joy Buolamwini (the ‘poet of code‘), together with Deborah Raji, meticulously reconstructed how accurate Rekognition was at identifying different types of faces. Buolamwini and Raji’s study has been extremely consequential in laying bare the racism and sexism in these facial recognition systems and was featured in the popular Coded Bias documentary.
Proctorio Is Using Racist Algorithms to Detect Faces
A student researcher has reverse-engineered the controversial exam software—and discovered a tool infamous for failing to recognize non-white faces.
By Todd Feathers for VICE on April 8, 2021
Seeing infrastructure: race, facial recognition and the politics of data
Facial recognition technology (FRT) has been widely studied and criticized for its racialising impacts and its role in the overpolicing of minoritised communities. However, a key aspect of facial recognition technologies is the dataset of faces used for training and testing. In this article, we situate FRT as an infrastructural assemblage and focus on the history of four facial recognition datasets: the original dataset created by W.W. Bledsoe and his team at the Panoramic Research Institute in 1963; the FERET dataset collected by the Army Research Laboratory in 1995; MEDS-I (2009) and MEDS-II (2011), the datasets containing dead arrestees, curated by the MITRE Corporation; and the Diversity in Faces dataset, created in 2019 by IBM. Through these four exemplary datasets, we suggest that the politics of race in facial recognition are about far more than simply representation, raising questions about the potential side-effects and limitations of efforts to simply ‘de-bias’ data.
By Nikki Stevens and Os Keyes for Taylor & Francis Online on March 26, 2021