Two engineering students at Harvard University, Caine Ardayfio and AnnPhu Nguyen, developed real-time facial recognition glasses. They tested the glasses on passengers in the Boston subway and easily identified a former journalist and some of his articles. A great way to strike up small talk or break the ice, you might think.
Face Detection, Remote Testing Software & Learning At Home While Black — Amaya’s Flashlight
Remote learning software, like most software, can be biased. Here’s what happened when one student, Amaya, used a test proctoring app to take her lab quiz.
From YouTube on February 7, 2022
How governments are using facial recognition to crack down on protesters
Mass protests used to offer a degree of safety in numbers. Facial recognition technology changes the equation.
By Darren Loucaides for Rest of World on March 27, 2024
Facial Recognition Led to Wrongful Arrests. So Detroit Is Making Changes.
The Detroit Police Department arrested three people after bad facial recognition matches, a national record. But it’s adopting new policies that even the A.C.L.U. endorses.
By Kashmir Hill for The New York Times on June 29, 2024
Racist Technology in Action: AI detection of emotion rates Black basketball players as ‘angrier’ than their White counterparts
In 2018, Lauren Rhue showed that two leading emotion detection software products had a racial bias against Black men: Face++ rated them as angrier, and Microsoft AI rated them as more contemptuous.
These Wrongly Arrested Black Men Say a California Bill Would Let Police Misuse Face Recognition
Three men falsely arrested based on face recognition technology have joined the fight against a California bill that aims to place guardrails around police use of the technology. They say it will still allow abuses and misguided arrests.
By Khari Johnson for The Markup on June 12, 2024
‘I was misidentified as shoplifter by facial recognition tech’
Live facial recognition is becoming increasingly common on UK high streets. Should we be worried?
By James Clayton for BBC on May 25, 2024
So, Amazon’s ‘AI-powered’ cashier-free shops use a lot of … humans. Here’s why that shouldn’t surprise you
This is how these bosses get rich: by hiding underpaid, unrecognised human work behind the trappings of technology, says the writer and artist James Bridle.
By James Bridle for The Guardian on April 10, 2024
Advocates have their say: Robin Pocornie
During this year’s Big Brother Awards, extra attention was paid to the positive impact of champions of our internet freedom. The Felipe Rodriquez Award – named after one of the founders of XS4ALL and a pioneer of the digital civil rights movement in the Netherlands – went to no fewer than five winners this year. With this prize we want to inspire and motivate others to work on our digital rights together, so we are happy to highlight the winners one by one in this interview series. This time: Robin Pocornie. She won the prize for calling out racist anti-cheating software at the Vrije Universiteit Amsterdam.
By Lotje Beek, Lotte Houwing, and Robin Pocornie for Bits of Freedom on March 5, 2024
Robin Aisha Pocornie’s TEDx talk: “Error 404: Human Face Not Found”
Robin Aisha Pocornie’s case should by now be familiar to regular readers of our Center’s work. Robin has now told her story in her own voice at TEDxAmsterdam.
Dutch Higher Education continues to use inequitable proctoring software
In October last year, RTL Nieuws showed that Proctorio’s software, used to check that students aren’t cheating during online exams, works less well for students of colour. Five months later, RTL asked the twelve Dutch educational institutions on Proctorio’s client list whether they were still using the tool. Eight say they still do.
The Network Society, part 48: Joy Buolamwini
I won’t beat around the bush: I am a fan of this woman, which is why this blog bears her name. Below, it is mostly about a book she wrote: Unmasking AI, my mission to protect what is human in a world of machines. I admire her above all because, even though she had every opportunity for a grand scientific career, she continually concerned herself with the victims of facial recognition: the digital technology she researched.
By Roeland Smeets for Netwerk Mediawijsheid on January 30, 2024
Vrije Universiteit cleared of discrimination with facial recognition system
Controversy: According to the VU, many of the problems were caused not by the facial recognition but by a faltering connection.
By Sjoerd de Jong for NRC on January 10, 2024
Late Night Talks: Students take university to court over discriminatory AI software
Vrije Universiteit Amsterdam student Robin Pocornie and Naomi Appelman, co-founder of the non-profit Racism and Technology Center, discuss discrimination within artificial intelligence. What are the advantages and disadvantages of artificial intelligence, to what extent do we have a grip on it, and how can we counter discrimination in the rapid development of technology?
By Charisa Chotoe, Naomi Appelman and Robin Pocornie for YouTube on December 3, 2023
Automating apartheid in the Occupied Palestinian Territories
In this interview, Matt Mahmoudi explains the Amnesty report titled Automating Apartheid, which he contributed to. The report exposes how the Israeli authorities extensively use surveillance tools, facial recognition technologies, and networks of CCTV cameras to support, intensify, and entrench their continued domination and oppression of Palestinians in the Occupied Palestinian Territories (OPT), Hebron, and East Jerusalem. Facial recognition software is used by Israeli authorities to consolidate existing practices of discriminatory policing and segregation, violating Palestinians’ basic rights.
‘Unmasking AI’ and the Fight for Algorithmic Justice
A conversation with Dr. Joy Buolamwini.
By Joy Buolamwini and Nabiha Syed for The Markup on November 18, 2023
Joy Buolamwini: “We’re giving AI companies a free pass”
The pioneering AI researcher and activist shares her personal journey in a new book, and explains her concerns about today’s AI systems.
By Joy Buolamwini and Melissa Heikkilä for MIT Technology Review on October 29, 2023
Judgement of the Dutch Institute for Human Rights shows how difficult it is to legally prove algorithmic discrimination
On October 17th, the Netherlands Institute for Human Rights ruled that the VU did not discriminate against bioinformatics student Robin Pocornie on the basis of race by using anti-cheating software. However, according to the institute, the VU has discriminated on the grounds of race in how they handled her complaint.
Why we should believe Black women more than tech companies
Imagine companies building technology that is fundamentally racist: it is known that this technology fails for Black people almost 30 percent more often than for white people. Then imagine this technology being deployed in a crucial area of your life: your work, your education, your healthcare. And finally, imagine that you are a Black woman and the technology works exactly as expected: not for you. You file a complaint, only to hear from the national human rights institution that in this case it probably wasn’t racism.
By Nani Jansen Reventlow for Volkskrant on October 22, 2023
Black people more often go unrecognised by anti-cheating software Proctorio
Faces of people with dark skin are recognised far less well by the exam software Proctorio, research by RTL Nieuws shows. The software, which is supposed to detect fraud, searches for the student’s face during online exams. That Black faces are recognised significantly worse leads to discrimination, say experts who reviewed RTL Nieuws’s research.
By Stan Hulsen for RTL Nieuws on October 7, 2023
Proctoring software uses a fudge factor for dark-skinned students to adjust their suspicion score
Respondus, a vendor of online proctoring software, has been granted a patent for their “systems and methods for assessing data collected by automated proctoring.” The patent shows that their example method for calculating a risk score is adjusted on the basis of people’s skin colour.
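The patent describes its example method only in prose. As a purely hypothetical sketch of what such a skin-tone-conditioned adjustment could look like in code (every name, weight, and value below is invented for illustration and is not taken from Respondus’s patent):

```python
# Hypothetical sketch, not Respondus's actual method: a behavioural
# "suspicion score" with a skin-tone-conditioned adjustment of the
# kind the patent describes. All names and weights are invented.

def suspicion_score(face_missing_events: int,
                    gaze_away_seconds: float,
                    dark_skin_detected: bool) -> float:
    """Combine proctoring signals into a single risk score."""
    score = 2.0 * face_missing_events + 0.5 * gaze_away_seconds
    if dark_skin_detected:
        # The fudge factor: discount face-tracking events for
        # darker-skinned students, because the face detector itself
        # fails on them more often (invented value).
        score *= 0.75
    return score
```

A correction of this kind only exists because the underlying face detection performs worse on darker skin.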
Al Jazeera asks: Can AI eliminate human bias or does it perpetuate it?
In its online series of digital dilemmas, Al Jazeera takes a look at AI in relation to social inequities. Loyal readers of this newsletter will recognise many of the examples they touch on, like how Stable Diffusion exacerbates and amplifies racial and gender disparities or the Dutch childcare benefits scandal.
Another false facial recognition match: pregnant woman wrongfully arrested
Police in America are using facial recognition software to match security footage of crimes to people. Kashmir Hill describes for the New York Times another example of a wrong match leading to a wrongful arrest.
The Best Algorithms Still Struggle to Recognize Black Faces
US government tests find even top-performing facial recognition systems misidentify blacks at rates 5 to 10 times higher than they do whites.
By Tom Simonite for WIRED on July 22, 2019
Eight Months Pregnant and Arrested After False Facial Recognition Match
Porcha Woodruff thought the police who showed up at her door to arrest her for carjacking were joking. She is the first woman known to be wrongfully accused as a result of facial recognition technology.
By Kashmir Hill for The New York Times on August 6, 2023
Current state of research: Face detection still has problems with darker faces
Scientific research on the quality of face detection systems keeps finding the same result: no matter how, when, and with which system the testing is done, faces of people with a darker skin tone are detected less reliably than faces of people with a lighter skin tone.
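The measurement behind these findings is simple to reproduce: run a detector over images annotated by skin-tone group and compare per-group detection rates. A minimal sketch, assuming a generic `detect_face` callable and group labels from a scheme such as Fitzpatrick or the Monk Skin Tone Scale:

```python
from collections import defaultdict

def detection_rates(samples, detect_face):
    """Per-group face detection rate.

    samples: iterable of (image, skin_tone_group) pairs, where each
    group label comes from an annotation scheme such as Fitzpatrick
    or the Monk Skin Tone Scale.
    detect_face: callable returning True if a face was found.
    """
    found, total = defaultdict(int), defaultdict(int)
    for image, group in samples:
        total[group] += 1
        if detect_face(image):
            found[group] += 1
    return {group: found[group] / total[group] for group in total}
```

Reporting the rates per group, rather than one aggregate accuracy, is what makes the disparity visible in the first place.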
It is mostly women of colour who are calling out the biases of AI
What you put into self-learning AI systems is what you get back out. Technology, largely developed by white men, thereby amplifies and hides those biases. It is mainly women (of colour) who are sounding the alarm.
By Marieke Rotman, Nani Jansen Reventlow, Oumaima Hajri and Tanya O’Carroll for De Groene Amsterdammer on July 12, 2023
France wants to legalise mass surveillance for the Paris Olympics 2024: “Safety” and “security”, for whom?
Many governments are using mass surveillance to support law enforcement in the name of safety and security. In France, the French Parliament (and before it, the French Senate) has approved the use of automated behavioural video surveillance at the 2024 Paris Olympics. Simply put, France wants to legalise mass surveillance at the national level, which can violate many rights, such as the freedom of assembly and association, privacy, and non-discrimination.
How a New Generation Is Combatting Digital Surveillance
Younger voices are using technology to respond to the needs of marginalized communities and nurture Black healing and liberation.
By Kenia Hale, Nate File and Payton Croskey for Boston Review on June 2, 2022
Your Voice is (Not) Your Passport
In summer 2021, sound artist, engineer, musician, and educator Johann Diedrick convened a panel at the intersection of racial bias, listening, and AI technology at Pioneer Works in Brooklyn, NY.
By Michelle Pfeifer for Sounding Out! on June 12, 2023
Countering Discriminatory e-proctoring systems
In this session, we explored how the EU Charter right to non-discrimination can be (and has been) used to fight back against discriminatory e-proctoring systems.
By Naomi Appelman and Robin Pocornie for Digital Freedom Fund on May 31, 2023
Representing skin tone, or Google’s hubris versus the simplicity of Crayola
Google wants to “help computers ‘see’ our world”, and one of their ways of battling how current AI and machine learning systems perpetuate biases is to introduce a more inclusive scale of skin tone, the ‘Monk Skin Tone Scale’.
Attempts to eliminate bias through diversifying datasets? A distraction from the root of the problem
In this eloquent and haunting piece, Hito Steyerl weaves the eugenicist history of statistics together with its integration into machine learning. She elaborates on why attempts to eliminate bias in facial recognition technology by diversifying datasets obscure the root of the problem: machine learning and automation are fundamentally reliant on extracting and exploiting human labour.
Google’s Photo App Still Can’t Find Gorillas. And Neither Can Apple’s.
Eight years after a controversy over Black people being mislabeled as gorillas by image analysis software — and despite big advances in computer vision — tech giants still fear repeating the mistake.
By Kashmir Hill and Nico Grant for The New York Times on May 22, 2023
Skin Tone Research @ Google
Introducing the Monk Skin Tone (MST) Scale, one of the ways we are moving AI forward with more inclusive computer vision tools.
From Skin Tone at Google
Consensus and subjectivity of skin tone annotation for ML fairness
Skin tone is an observable characteristic that is subjective, perceived differently by individuals (e.g., depending on their location or culture), and thus complicated to annotate. That said, the ability to reliably and accurately annotate skin tone is highly important in computer vision. This became apparent in 2018, when the Gender Shades study highlighted that computer vision systems struggled to detect people with darker skin tones, and performed particularly poorly for women with darker skin tones. The study highlighted the importance for computer vision researchers and practitioners to evaluate their technologies across the full range of skin tones and at intersections of identities. Beyond evaluating model performance on skin tone, skin tone annotations enable researchers to measure diversity and representation in image retrieval systems, dataset collection, and image generation. For all of these applications, a collection of meaningful and inclusive skin tone annotations is key.
By Candice Schumann and Gbolahan O. Olanubi for Google AI Blog on May 15, 2023
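The blog post does not spell out its aggregation method in this excerpt. As an illustration only, one common way to form a consensus label from several subjective ordinal ratings is to take the median; a minimal sketch using the 10-point MST scale:

```python
import statistics

def consensus_mst(ratings: list[int]) -> int:
    """Median of per-annotator Monk Skin Tone ratings
    (1 = lightest, 10 = darkest).

    Illustrative aggregation only, not Google's published method:
    the median suits ordinal labels because it is robust to a
    single outlying annotator.
    """
    if not ratings or not all(1 <= r <= 10 for r in ratings):
        raise ValueError("expected one or more MST ratings in 1..10")
    return round(statistics.median(ratings))
```

For example, `consensus_mst([4, 5, 5, 7])` yields 5, and replacing the 7 with an outlying 10 would not shift the result.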
Mean Images
An artist considers a new form of machinic representation: the statistical rendering of large datasets, indexed to the probable rather than the real of photography; to the uncanny composite rather than the abstraction of the graph.
By Hito Steyerl for New Left Review on April 28, 2023
‘Thousands of Dollars for Something I Didn’t Do’
Because of a bad facial recognition match and other hidden technology, Randal Reid spent nearly a week in jail, falsely accused of stealing purses in a state he said he had never even visited.
By Kashmir Hill and Ryan Mac for The New York Times on March 31, 2023