Author Tarcízio Silva on how algorithmic racism exposes the myth of “racial democracy.”
By Alex González Ormerod and Tarcízio Silva for Rest of World on April 22, 2022
The startup promises a fairly distributed, cryptocurrency-based universal basic income. So far, all it has done is build a biometric database from the bodies of the poor.
By Adi Renaldi, Antoaneta Rouss, Eileen Guo and Lujain Alsedeg for MIT Technology Review on April 6, 2022
We believe that software used for monitoring students during online tests (so-called proctoring software) should be abolished because it discriminates against students with a darker skin colour.
Continue reading “How our world is designed for the ‘reference man’ and why proctoring should be abolished”
This example of racist technology in action combines racist facial recognition systems with exploitative working conditions and algorithmic management to produce a perfect example of how technology can exacerbate both economic precarity and racial discrimination.
Continue reading “Racist Technology in Action: Uber’s racially discriminatory facial recognition system firing workers”
ADCU has launched legal action against Uber over the unfair dismissal of a driver and a courier after the company’s facial recognition system failed to identify them.
By James Farrar, Paul Jennings and Yaseen Aslam for The App Drivers and Couriers Union on October 6, 2021
Activists say the biometric tools, developed principally around white datasets, risk reinforcing racist practices.
By Charlotte Peet for Rest of World on October 22, 2021
Chris Gilliard grew up with racist policing in Detroit. He sees a new form of oppression in the tech we use every day.
By Chris Gilliard and Will Oremus for Washington Post on September 17, 2021
Brands originally built on racist stereotypes have existed for more than a century. Now racial prejudice is also creeping into the design of tech products and algorithms.
From YouTube on September 15, 2021
Vox host Joss Fong wanted to know… “Why do we think tech is neutral? How do algorithms become biased? And how can we fix these algorithms before they cause harm?”
Continue reading “Are we automating racism?”
Marginalized groups are often not represented in technology development. What we need is inclusive participation that centres the concerns of these groups.
By Nani Jansen Reventlow for The World Economic Forum on July 8, 2021
In an opinion piece in Parool, The Racism and Technology Center wrote about how Dutch universities use proctoring software that relies on facial recognition technology which systematically disadvantages students of colour (see the English translation of the opinion piece). Earlier, the center wrote about the racial bias of these systems, which led to Black students being excluded from exams or labeled as frauds because the software did not properly recognise their faces. Despite the clear proof that Proctorio disadvantages students of colour, the University of Amsterdam still used Proctorio extensively during this June’s exam weeks.
Continue reading “Racist Technology in Action: Proctoring software disadvantaging students of colour in the Netherlands”
The University of Amsterdam can no longer justify the use of proctoring software for remote examinations now that we know that it has a negative impact on people of colour.
Continue reading “Call to the University of Amsterdam: Stop using racist proctoring software”
The UvA can no longer justify deploying proctoring for its exams, now that it is clear that the surveillance software has a negative impact precisely on people of colour.
Continue reading “Oproep aan de UvA: stop het gebruik van racistische proctoringsoftware”
Surveillance software disadvantages people of colour, research shows. Why, then, does the UvA still use it, ask Naomi Appelman, Jill Toh and Hans de Zwart.
By Hans de Zwart, Jill Toh and Naomi Appelman for Het Parool on July 6, 2021
Gwendoline Delbos-Corfield MEP in conversation with Laurence Meyer of the Digital Freedom Fund about the dangers of the increasing use of biometric mass surveillance, both within the EU and beyond, and the impact it can have on the lives of people who are already being discriminated against.
By Gwendoline Delbos-Corfield and Laurence Meyer for Greens/EFA on June 24, 2021
A New Jersey man was accused of shoplifting and trying to hit an officer with a car. He is the third known Black man to be wrongfully arrested based on face recognition.
By Kashmir Hill for The New York Times on December 29, 2020
For years, Big Tech has set the global AI research agenda. Now, groups like Black in AI and Queer in AI are upending the field’s power dynamics to build AI that serves people.
By Karen Hao for MIT Technology Review on June 14, 2021
A photo booth at Hamburg’s Landesbetrieb Verkehr apparently does not recognise Black people. Because of this, a woman from Hamburg was unable to apply for an international driving licence in December.
From NDR.de on July 25, 2020
Experience the world of face detection algorithms in this freaky test.
By Tijmen Schep for How Normal Am I
Twitter just made a change to the way it displays images that has visual artists on the social network celebrating.
By Taylor Hatmaker for TechCrunch on May 6, 2021
“Far from a ‘human-centred’ approach, the draft law in its current form runs the risk of enabling Orwellian surveillance states,” writes @sarahchander from @edri.
By Sarah Chander for Euronews on April 22, 2021
An already infamous example of racist technology is Amazon’s facial recognition system ‘Rekognition’, which had an enormous racial and gender bias. Researcher and founder of the Algorithmic Justice League Joy Buolamwini (the ‘poet of code‘), together with Deborah Raji, meticulously reconstructed how accurate Rekognition was at identifying different types of faces. Buolamwini and Raji’s study has been extremely consequential in laying bare the racism and sexism in these facial recognition systems and was featured in the popular Coded Bias documentary.
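The core of their method is disaggregation: instead of reporting one aggregate accuracy number, measure the system separately for each intersectional subgroup (skin type crossed with gender). The sketch below is purely illustrative, not Buolamwini and Raji’s actual code; the CSV file and column names are assumptions.

```python
# Illustrative sketch of an intersectional accuracy audit, in the spirit of
# Buolamwini and Raji's work. The file and column names are hypothetical.
from collections import defaultdict
import csv

def disaggregated_accuracy(rows):
    """Accuracy per (skin type, gender) subgroup rather than one aggregate score."""
    correct, total = defaultdict(int), defaultdict(int)
    for row in rows:
        group = (row["skin_type"], row["gender"])  # e.g. ("darker", "female")
        total[group] += 1
        correct[group] += row["predicted_gender"] == row["gender"]
    return {group: correct[group] / total[group] for group in total}

# Hypothetical audit file: one row per test image.
with open("rekognition_predictions.csv") as f:
    per_group = disaggregated_accuracy(csv.DictReader(f))

for group, accuracy in sorted(per_group.items(), key=lambda item: item[1]):
    print(group, f"{accuracy:.1%}")
```

An aggregate score can look excellent while hiding exactly the subgroup disparity this kind of audit is designed to expose.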
Continue reading “Racist Technology in Action: Amazon’s racist facial ‘Rekognition’”
A student researcher has reverse-engineered the controversial exam software—and discovered a tool infamous for failing to recognize non-white faces.
By Todd Feathers for VICE on April 8, 2021
Facial recognition technology (FRT) has been widely studied and criticized for its racialising impacts and its role in the overpolicing of minoritised communities. However, a key aspect of facial recognition technologies is the dataset of faces used for training and testing. In this article, we situate FRT as an infrastructural assemblage and focus on the history of four facial recognition datasets: the original dataset created by W.W. Bledsoe and his team at the Panoramic Research Institute in 1963; the FERET dataset collected by the Army Research Laboratory in 1995; MEDS-I (2009) and MEDS-II (2011), the datasets containing dead arrestees, curated by the MITRE Corporation; and the Diversity in Faces dataset, created in 2019 by IBM. Through these four exemplary datasets, we suggest that the politics of race in facial recognition are about far more than simply representation, raising questions about the potential side-effects and limitations of efforts to simply ‘de-bias’ data.
By Nikki Stevens and Os Keyes for Taylor & Francis Online on March 26, 2021
As the European Commission prepares its legislative proposal on artificial intelligence, human rights groups are watching closely for clear rules to limit discriminatory AI. In practice, this means a ban on biometric mass surveillance practices and red lines (legal limits) to stop harmful uses of AI-powered technologies.
By Sarah Chander for European Digital Rights (EDRi) on March 16, 2021
The use of software to automatically detect cheating on online exams – online proctoring – has been the go-to solution for many schools and universities in response to the COVID-19 pandemic. In this article, Shea Swauger addresses some of the potential discriminatory, privacy and security harms that can impact groups of students across class, gender, race, and disability lines. Swauger critiques how these technologies encode “normal” bodies – cisgender, white, able-bodied, neurotypical, male – as the standard, and how students who do not (or cannot) conform are punished by them.
Continue reading “Online proctoring excludes and discriminates”
Upcoming rules on AI might make Europe’s race issues a tech problem too.
By Melissa Heikkilä for POLITICO on March 16, 2021
When a secretive start-up scraped the internet to build a facial-recognition tool, it tested a legal and ethical limit — and blew the future of privacy in America wide open.
By Kashmir Hill for The New York Times on March 18, 2021
A growing industry wants to scrutinize the algorithms that govern our lives—but it needs teeth.
By Alfred Ng for The Markup on February 23, 2021
Cheating is not a technological problem, but a social and pedagogical problem. Technology is often blamed for creating the conditions in which cheating proliferates and is then offered as the solution to the problem it created; both claims are false.
By Shea Swauger for Hybrid Pedagogy on April 2, 2020
The EU is preparing its ‘Action Plan’ to address structural racism in Europe. With digital high on the EU’s legislative agenda, it’s time we tackle racism perpetuated by technology, writes Sarah Chander.
By Sarah Chander for EURACTIV.com on September 3, 2020
Face recognition is a threat to not only our constitutional rights, but to people of color and other marginalized groups who are more likely to be misidentified — and bear the consequences.
By Kate Ruane for American Civil Liberties Union (ACLU) on February 17, 2021
What can universities learn from Antoine Griezmann, second striker at FC Barcelona? Pursue human rights in both word and deed, argue Joshua B. Cohen and Assamaual Saidi, and there is still a world to be won there.
By Assamaual Saidi and Joshua B. Cohen for Het Parool on February 11, 2021
Technology has never been colorblind. It’s time to abolish notions of “universal” users of software.
From The Internet Health Report 2020 on January 1, 2021
While many of us had our attention focused on the use of biometric surveillance technologies in managing the COVID-19 pandemic, in a new UN report professor E. Tendayi Achiume forcefully puts the spotlight on the racial and discriminatory dimension of biometric surveillance technology in border enforcement.
Continue reading “Racism and “Smart Borders””
A recent, yet already classic, example of racist technology is Twitter’s photo cropping machine learning algorithm. The algorithm was shown to consistently favour white faces in the cropped previews of pictures.
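The bias was demonstrated with a strikingly simple experiment: stack a white face and a Black face in one tall image, let the cropper choose the preview, and tally which face survives. Below is a minimal, purely hypothetical sketch of that tally; `saliency()` stands in for Twitter’s unpublished saliency model and is not a real API.

```python
# Hypothetical sketch of the paired-faces test users ran against Twitter's
# image cropper. saliency() is a stand-in for the proprietary saliency model.
from typing import Callable, List, Tuple

Face = str  # path to a face image

def kept_in_crop(top: Face, bottom: Face, saliency: Callable[[Face], float]) -> Face:
    """The preview crop centres on the most salient region; return the face it keeps."""
    return top if saliency(top) >= saliency(bottom) else bottom

def white_kept_rate(pairs: List[Tuple[Face, Face]],
                    saliency: Callable[[Face], float]) -> float:
    """Fraction of trials keeping the white face. Positions are swapped on
    alternate trials so a plain top-of-image preference cannot masquerade as bias."""
    kept = 0
    for i, (white, black) in enumerate(pairs):
        top, bottom = (white, black) if i % 2 == 0 else (black, white)
        kept += kept_in_crop(top, bottom, saliency) == white
    return kept / len(pairs)
```

A rate well above 0.5 across many swapped pairs is the disparity the viral experiments pointed to.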
Continue reading “Racist technology in action: Cropping out the non-white”
A Google service that automatically labels images produced starkly different results depending on the skin tone in a given image. The company fixed the issue, but the problem is likely much broader.
By Nicolas Kayser-Bril for AlgorithmWatch on April 7, 2020
China using artificial intelligence to oppress the Uyghurs: sounds like something far removed from your everyday life? The Netherlands, too, tracks (and prosecutes) specific population groups with algorithms. As in Roermond, where cameras sound the alarm at cars with Eastern European licence plates.
By Florentijn van Rootselaar for OneWorld on January 14, 2021
How many more Black men will be wrongfully arrested before this country puts a stop to the unregulated use of facial recognition software?
From Washington Post on December 31, 2020
Yup, still racist.
By Anthony Tordillos for Twitter on December 7, 2020