In this eloquent and haunting piece, Hito Steyerl weaves the eugenicist history of statistics together with its ongoing integration into machine learning. She elaborates on why attempts to eliminate bias in facial recognition technology by diversifying datasets obscure the root of the problem: machine learning and automation are fundamentally reliant on extracting and exploiting human labour.
Google’s Photo App Still Can’t Find Gorillas. And Neither Can Apple’s.
Eight years after a controversy over Black people being mislabeled as gorillas by image analysis software — and despite big advances in computer vision — tech giants still fear repeating the mistake.
By Kashmir Hill and Nico Grant for The New York Times on May 22, 2023
Skin Tone Research @ Google
Introducing the Monk Skin Tone (MST) Scale, one of the ways we are moving AI forward with more inclusive computer vision tools.
From Skin Tone at Google
Consensus and subjectivity of skin tone annotation for ML fairness
Skin tone is an observable characteristic that is subjective, perceived differently by individuals (e.g., depending on their location or culture) and thus is complicated to annotate. That said, the ability to reliably and accurately annotate skin tone is highly important in computer vision. This became apparent in 2018, when the Gender Shades study highlighted that computer vision systems struggled to detect people with darker skin tones, and performed particularly poorly for women with darker skin tones. The study highlighted the importance for computer vision researchers and practitioners to evaluate their technologies across the full range of skin tones and at intersections of identities. Beyond evaluating model performance on skin tone, skin tone annotations enable researchers to measure diversity and representation in image retrieval systems, dataset collection, and image generation. For all of these applications, a collection of meaningful and inclusive skin tone annotations is key.
By Candice Schumann and Gbolahan O. Olanubi for Google AI Blog on May 15, 2023
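The disaggregated evaluation the excerpt above describes can be illustrated with a short sketch. The snippet below is a hypothetical example, not Google’s published tooling: it assumes each image carries a Monk Skin Tone (MST) annotation as a bucket from 1 to 10 plus a boolean detection outcome, and it computes face-detection accuracy per bucket so that performance gaps across skin tones become visible.

```python
from collections import defaultdict

def accuracy_by_skin_tone(results):
    """Compute face-detection accuracy disaggregated by skin tone.

    `results` is a list of (mst_bucket, detected) pairs, where
    mst_bucket is a Monk Skin Tone annotation (1-10) and detected
    is True if the system found the face. Hypothetical data format.
    """
    hits = defaultdict(int)    # correct detections per bucket
    totals = defaultdict(int)  # total images per bucket
    for mst_bucket, detected in results:
        totals[mst_bucket] += 1
        hits[mst_bucket] += int(detected)
    return {b: hits[b] / totals[b] for b in sorted(totals)}

# Toy data: detection succeeds less often for darker-skin buckets.
sample = [(2, True), (2, True), (3, True), (8, True), (8, False), (9, False)]
print(accuracy_by_skin_tone(sample))  # {2: 1.0, 3: 1.0, 8: 0.5, 9: 0.0}
```

A single aggregate accuracy number would hide exactly the gap that the per-bucket breakdown exposes, which is why studies like Gender Shades report results disaggregated by skin tone.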
Mean Images
An artist considers a new form of machinic representation: the statistical rendering of large datasets, indexed to the probable rather than the real of photography; to the uncanny composite rather than the abstraction of the graph.
By Hito Steyerl for New Left Review on April 28, 2023
‘Thousands of Dollars for Something I Didn’t Do’
Because of a bad facial recognition match and other hidden technology, Randal Reid spent nearly a week in jail, falsely accused of stealing purses in a state he said he had never even visited.
By Kashmir Hill and Ryan Mac for The New York Times on March 31, 2023
Doing an exam as if “driving at night with a car approaching from the other direction with its headlights on full-beam”
Robin Pocornie’s complaint against the VU for its use of Proctorio, which had trouble detecting her face because she is a person of colour, is part of a larger, international story, as an article in WIRED shows.
Watching the watchers: bias and vulnerability in remote proctoring software
Educators are rapidly switching to remote proctoring and examination software for their testing needs, both due to the COVID-19 pandemic and the expanding virtualization of the education sector. State boards are increasingly using this software for high-stakes legal and medical licensing exams. Three key concerns arise with the use of this complex software: exam integrity, exam procedural fairness, and exam-taker security and privacy. We conduct the first technical analysis of each of these concerns through a case study of four primary proctoring suites used in U.S. law school and state attorney licensing exams. We reverse engineer these proctoring suites and find that, despite promises of high security, all their anti-cheating measures can be trivially bypassed and can pose significant user security risks. We evaluate current facial recognition classifiers alongside the classifier used by Examplify, the legal exam proctoring suite with the largest market share, to ascertain their accuracy and determine whether faces with certain skin tones are more readily flagged for cheating. Finally, we offer recommendations to improve the integrity and fairness of the remotely proctored exam experience.
By Avi Ginsberg, Ben Burgess, Edward W. Felten and Shaanan Cohney for arXiv.org on May 6, 2022
Assessing variation in human skin tone to inform face recognition system design
International Face Performance Conference (IFPC) 2022
By Yevgeniy B. Sirotin for NIST Pages on November 1, 2022
Racial Discrimination in Face Recognition Technology
The application of face recognition technology in the criminal justice system threatens to perpetuate racial inequality.
By Alex Najibi for Science in the News on October 24, 2020
Enough is Enough. Tell Congress to Ban Federal Use of Face Recognition
Cities and counties across the country have banned government use of face surveillance technology, and many more are weighing proposals to do so. From Boston to San Francisco, Jackson, Mississippi to Minneapolis, elected officials and activists know that face surveillance gives police the power to track us wherever we go. It also disproportionately impacts people of color, turns us all into perpetual suspects, increases the likelihood of being falsely arrested, and chills people’s willingness to participate in First Amendment-protected activities. Even Amazon, known for operating one of the largest video surveillance networks in the history of the world, extended its moratorium on selling face recognition to police.
By Matthew Guariglia for Electronic Frontier Foundation (EFF) on April 4, 2023
This Student Is Taking On ‘Biased’ Exam Software
Mandatory face-recognition tools have repeatedly failed to identify people with darker skin tones. One Dutch student is fighting to end their use.
By Morgan Meaker and Robin Pocornie for WIRED on April 5, 2023
ExamSoft’s proctoring software has a face-detection problem
A professor at Suffolk University Law School shares a bypass to an invasive feature of the ExamSoft testing software, and urges the company to change, in a new report.
By Monica Chin for The Verge on January 6, 2021
First Dutch citizen proves that an algorithm discriminated against her on the basis of her skin colour
Robin Pocornie was featured in the Dutch current affairs programme EenVandaag. Professor Sennay Ghebreab and former Member of Parliament Kees Verhoeven provided expertise and commentary.
From Vrij Nederland: Do we get what we deserve?
Let me make the question specific to my own field: are decisions made by technology just? Do you deserve the decision that rolls out of the machine?
Will you soon be out of a job because of AI?
The end of 2022 was dominated by AI tools. You can create digital artworks with DALL-E, AI profile pictures with Lensa, and, to top it all off, generate an entire cover letter or essay within a few seconds using ChatGPT. We already knew that AI, or artificial intelligence, can do a lot. But ChatGPT is really seen as a breakthrough. What is it? And will AI make us redundant? Oh, and by the way, Devran thought he would head into the new year nice and relaxed with the chatbot, but whether that was such a good idea…
By Robin Pocornie for YouTube on December 31, 2022
Dutch Institute for Human Rights speaks about Proctorio at Dutch Parliament
In a roundtable on artificial intelligence in the Dutch Parliament, Quirine Eijkman spoke on behalf of the Netherlands Institute for Human Rights about Robin Pocornie’s case against the discriminatory use of Proctorio at the VU.
Apple Watch class action alleges device fails to accurately detect blood oxygen levels in people of color
The Apple Watch fails to accurately measure the blood oxygen levels in people of color, according to a class action lawsuit filed Dec. 24 in New York federal court.
By Anne Bucher for Top Class Actions on December 29, 2022
Dutch Institute for Human Rights: Use of anti-cheating software can be algorithmic discrimination (i.e. racist)
Dutch student Robin Pocornie filed a complaint with the Dutch Institute for Human Rights. The surveillance software that her university used had trouble recognising her as a human being because of her skin colour. After a hearing, the Institute has now ruled that Robin has presented enough evidence to assume that she was indeed discriminated against. The ball is now in the court of the VU (her university) to prove that the software treated everybody the same.
Anti-cheating software at the VU discriminates
Before an exam, anti-cheating software checks whether you are really a human being. But what if the system doesn’t recognise you because you have a dark skin colour? That happened to student Robin Pocornie, who took her case to the College voor de Rechten van de Mens (the Netherlands Institute for Human Rights). She talks about it together with Naomi Appelman of the Racism and Technology Centre, who assisted Robin in her case.
By Naomi Appelman, Natasja Gibbs and Robin Pocornie for NPO Radio 1 on December 12, 2022
VU must prove that its anti-cheating software did not discriminate against Black student
The Vrije Universiteit Amsterdam (VU) must demonstrate that its anti-cheating software did not discriminate against a student because of her dark skin colour. She has made it sufficiently plausible that this is what happened.
By Afran Groenewoud for NU.nl on December 9, 2022
For the first time, a presumption of algorithmic discrimination has been successfully substantiated
A student has succeeded in presenting sufficient facts to establish a presumption of algorithmic discrimination. The woman complains that the Vrije Universiteit discriminated against her by deploying anti-cheating software. This software uses face-detection algorithms. The software did not detect her when she had to log in for exams. The woman suspects that this is because of her dark skin colour. The university has ten weeks to demonstrate that the software did not discriminate. This follows from the interim ruling published by the Institute.
From College voor de Rechten van de Mens on December 9, 2022
Human rights institute: discrimination by an algorithm deemed ‘plausible’ for the first time; VU must prove the contrary
It is ‘plausible’ that the algorithm of anti-cheating software discriminated against a student at the Vrije Universiteit (VU), says the Netherlands Institute for Human Rights. It is now up to the VU to prove the contrary.
By Fleur Damen for Volkskrant on December 9, 2022
Uber’s facial recognition is locking Indian drivers out of their accounts
Some drivers in India are finding their accounts permanently blocked. Better transparency of the AI technology could help gig workers.
By Varsha Bansal for MIT Technology Review on December 6, 2022
Chinese security firm advertises ethnicity recognition technology while facing UK ban
Campaigners concerned that ‘same racist technology used to repress Uyghurs is being marketed in Britain’.
By Alex Hern for The Guardian on December 4, 2022
Surveillance Tech Perpetuates Police Abuse of Power
Among global movements to reckon with police powers, a new report from UK research group No Tech For Tyrants unveils how police use surveillance technology to abuse power around the world.
From No Tech for Tyrants on November 7, 2022
The unseen Black faces of AI algorithms
Pivotal study of facial recognition algorithms revealed racial bias.
By Abeba Birhane for Nature on October 19, 2022
Buzzy Silicon Valley startup wants to make the world sound whiter
Sanas’ service has already launched in seven call centers. But experts are concerned it could dehumanize workers.
By Joshua Bote for SFGATE on August 22, 2022
Dutch student files complaint with the Netherlands Institute for Human Rights about the use of racist software by her university
During the pandemic, Dutch student Robin Pocornie had to do her exams with a light pointing straight at her face. Her White fellow students didn’t have to do that. Her university’s surveillance software discriminated against her, and that is why she has filed a complaint (read the full complaint in Dutch) with the Netherlands Institute for Human Rights.
How AI reinforces racism in Brazil
Author Tarcízio Silva on how algorithmic racism exposes the myth of “racial democracy.”
By Alex González Ormerod and Tarcízio Silva for Rest of World on April 22, 2022
Deception, exploited workers, and cash handouts: How Worldcoin recruited its first half a million test users
The startup promises a fairly distributed, cryptocurrency-based universal basic income. So far all it’s done is build a biometric database from the bodies of the poor.
By Adi Renaldi, Antoaneta Rouss, Eileen Guo and Lujain Alsedeg for MIT Technology Review on April 6, 2022
How our world is designed for the ‘reference man’ and why proctoring should be abolished
We believe that software used for monitoring students during online tests (so-called proctoring software) should be abolished because it discriminates against students with a darker skin colour.
Racist Technology in Action: Uber’s racially discriminatory facial recognition system firing workers
This example of racist technology in action combines a racist facial recognition system with exploitative working conditions and algorithmic management, showing perfectly how technology can exacerbate both economic precarity and racial discrimination.
ADCU initiates legal action against Uber’s workplace use of racially discriminatory facial recognition systems
ADCU has launched legal action against Uber over the unfair dismissal of a driver and a courier after the company’s facial recognition system failed to identify them.
By James Farrar, Paul Jennings and Yaseen Aslam for The App Drivers and Couriers Union on October 6, 2021
Brazil’s embrace of facial recognition worries Black communities
Activists say the biometric tools, developed principally around white datasets, risk reinforcing racist practices.
By Charlotte Peet for Rest of World on October 22, 2021
A Detroit community college professor is fighting Silicon Valley’s surveillance machine. People are listening.
Chris Gilliard grew up with racist policing in Detroit. He sees a new form of oppression in the tech we use every day.
By Chris Gilliard and Will Oremus for Washington Post on September 17, 2021
How Stereotyping and Bias Lingers in Product Design
Brands originally built on racist stereotypes have existed for more than a century. Now racial prejudice is also creeping into the design of tech products and algorithms.
From YouTube on September 15, 2021
Are we automating racism?
Vox host Joss Fong wanted to know… “Why do we think tech is neutral? How do algorithms become biased? And how can we fix these algorithms before they cause harm?”
Why tech needs to focus on the needs of marginalized groups
Marginalized groups are often not represented in technology development. What we need is inclusive participation that centres the concerns of these groups.
By Nani Jansen Reventlow for The World Economic Forum on July 8, 2021
Racist Technology in Action: Proctoring software disadvantaging students of colour in the Netherlands
In an opinion piece in Parool, the Racism and Technology Center wrote about how Dutch universities use proctoring software with facial recognition technology that systematically disadvantages students of colour (see the English translation of the opinion piece). Earlier, the center wrote about the racial bias of these systems, which leads to Black students being excluded from exams or labeled as frauds because the software does not properly recognise their faces. Despite the clear proof that Proctorio disadvantages students of colour, the University of Amsterdam still used Proctorio extensively during this June’s exam weeks.