Vox host Joss Fong wanted to know… “Why do we think tech is neutral? How do algorithms become biased? And how can we fix these algorithms before they cause harm?” Continue reading “Are we automating racism?”
Marginalized groups are often not represented in technology development. What we need is inclusive participation that centres the concerns of these groups.
By Nani Jansen Reventlow for The World Economic Forum on July 8, 2021
In an opinion piece in Het Parool, the Racism and Technology Center wrote about how Dutch universities use proctoring software whose facial recognition technology systematically disadvantages students of colour (see the English translation of the opinion piece). Earlier, the Center wrote about the racial bias of these systems, which has led to Black students being excluded from exams or labelled as frauds because the software did not properly recognise their faces. Despite the clear proof that Proctorio disadvantages students of colour, the University of Amsterdam still used Proctorio extensively during this June’s exam weeks. Continue reading “Racist Technology in Action: Proctoring software disadvantaging students of colour in the Netherlands”
The University of Amsterdam can no longer justify the use of proctoring software for remote examinations now that we know that it has a negative impact on people of colour. Continue reading “Call to the University of Amsterdam: Stop using racist proctoring software”
The UvA can no longer justify deploying proctoring for its exams now that it is clear that this surveillance software has a negative impact on people of colour in particular. Continue reading “Oproep aan de UvA: stop het gebruik van racistische proctoringsoftware”
Research shows that surveillance software disadvantages people of colour. Why, then, is the UvA still using it, ask Naomi Appelman, Jill Toh and Hans de Zwart.
By Hans de Zwart, Jill Toh and Naomi Appelman for Het Parool on July 6, 2021
Gwendoline Delbos-Corfield MEP in conversation with Laurence Meyer of the Digital Freedom Fund about the dangers of the increasing use of biometric mass surveillance, both within the EU and beyond, and the impact it can have on the lives of people who are already being discriminated against.
By Gwendoline Delbos-Corfield and Laurence Meyer for Greens/EFA on June 24, 2021
A New Jersey man was accused of shoplifting and trying to hit an officer with a car. He is the third known Black man to be wrongfully arrested based on face recognition.
By Kashmir Hill for The New York Times on December 29, 2020
For years, Big Tech has set the global AI research agenda. Now, groups like Black in AI and Queer in AI are upending the field’s power dynamics to build AI that serves people.
By Karen Hao for MIT Technology Review on June 14, 2021
A photo booth operated by Hamburg’s Landesbetrieb Verkehr apparently does not recognise Black people. As a result, a Hamburg woman was unable to apply for an international driving licence in December.
From NDR.de on July 25, 2020
Experience the world of face detection algorithms in this freaky test.
By Tijmen Schep for How Normal Am I
Twitter just made a change to the way it displays images that has visual artists on the social network celebrating.
By Taylor Hatmaker for TechCrunch on May 6, 2021
“Far from a ‘human-centred’ approach, the draft law in its current form runs the risk of enabling Orwellian surveillance states,” writes @sarahchander from @edri.
By Sarah Chander for Euronews on April 22, 2021
An already infamous example of racist technology is Amazon’s facial recognition system ‘Rekognition’, which had an enormous racial and gender bias. Researcher and founder of the Algorithmic Justice League Joy Buolamwini (the ‘poet of code’), together with Deborah Raji, meticulously reconstructed how accurate Rekognition was at identifying different types of faces. Buolamwini and Raji’s study has been extremely consequential in laying bare the racism and sexism in these facial recognition systems and was featured in the popular Coded Bias documentary. Continue reading “Racist Technology in Action: Amazon’s racist facial ‘Rekognition’”
A student researcher has reverse-engineered the controversial exam software—and discovered a tool infamous for failing to recognize non-white faces.
By Todd Feathers for VICE on April 8, 2021
Facial recognition technology (FRT) has been widely studied and criticized for its racialising impacts and its role in the overpolicing of minoritised communities. However, a key aspect of facial recognition technologies is the dataset of faces used for training and testing. In this article, we situate FRT as an infrastructural assemblage and focus on the history of four facial recognition datasets: the original dataset created by W.W. Bledsoe and his team at the Panoramic Research Institute in 1963; the FERET dataset collected by the Army Research Laboratory in 1995; MEDS-I (2009) and MEDS-II (2011), the datasets containing dead arrestees, curated by the MITRE Corporation; and the Diversity in Faces dataset, created in 2019 by IBM. Through these four exemplary datasets, we suggest that the politics of race in facial recognition are about far more than simply representation, raising questions about the potential side-effects and limitations of efforts to simply ‘de-bias’ data.
By Nikki Stevens and Os Keyes for Taylor & Francis Online on March 26, 2021
As the European Commission prepares its legislative proposal on artificial intelligence, human rights groups are watching closely for clear rules to limit discriminatory AI. In practice, this means a ban on biometric mass surveillance practices and red lines (legal limits) to stop harmful uses of AI-powered technologies.
By Sarah Chander for European Digital Rights (EDRi) on March 16, 2021
The use of software to automatically detect cheating on online exams – online proctoring – has been the go-to solution for many schools and universities in response to the COVID-19 pandemic. In this article, Shea Swauger addresses some of the potential discriminatory, privacy and security harms that can impact groups of students across class, gender, race, and disability lines. Swauger critiques how these technologies encode “normal” bodies – cisgender, white, able-bodied, neurotypical, male – as the standard, and how students who do not (or cannot) conform are punished by it. Continue reading “Online proctoring excludes and discriminates”
Upcoming rules on AI might make Europe’s race issues a tech problem too.
By Melissa Heikkilä for POLITICO on March 16, 2021
When a secretive start-up scraped the internet to build a facial-recognition tool, it tested a legal and ethical limit — and blew the future of privacy in America wide open.
By Kashmir Hill for The New York Times on March 18, 2021
A growing industry wants to scrutinize the algorithms that govern our lives—but it needs teeth.
By Alfred Ng for The Markup on February 23, 2021
Cheating is not a technological problem, but a social and pedagogical problem. Technology is often blamed for creating the conditions in which cheating proliferates and is then offered as the solution to the problem it created; both claims are false.
By Shea Swauger for Hybrid Pedagogy on April 2, 2020
The EU is preparing its ‘Action Plan’ to address structural racism in Europe. With digital high on the EU’s legislative agenda, it’s time we tackle racism perpetuated by technology, writes Sarah Chander.
By Sarah Chander for EURACTIV.com on September 3, 2020
Face recognition is a threat to not only our constitutional rights, but to people of color and other marginalized groups who are more likely to be misidentified — and bear the consequences.
By Kate Ruane for American Civil Liberties Union (ACLU) on February 17, 2021
What can universities learn from Antoine Griezmann, second striker at FC Barcelona? To pursue human rights in both word and deed, argue Joshua B. Cohen and Assamaual Saidi, and there is still a world to be won there.
By Assamaual Saidi and Joshua B. Cohen for Het Parool on February 11, 2021
Technology has never been colorblind. It’s time to abolish notions of “universal” users of software.
From The Internet Health Report 2020 on January 1, 2021
While many of us had our attention focused on the use of biometric surveillance technologies in managing the COVID-19 pandemic, a new UN report by Prof. E. Tendayi Achiume forcefully puts the spotlight on the racial and discriminatory dimension of biometric surveillance technology in border enforcement. Continue reading “Racism and “Smart Borders””
A recent, yet already classic, example of racist technology is Twitter’s photo-cropping machine learning algorithm. The algorithm was shown to consistently favour white faces in the cropped previews of pictures. Continue reading “Racist technology in action: Cropping out the non-white”
A Google service that automatically labels images produced starkly different results depending on the skin tone in a given image. The company fixed the issue, but the problem is likely much broader.
By Nicolas Kayser-Bril for AlgorithmWatch on April 7, 2020
China using artificial intelligence to oppress the Uyghurs: does that sound like a faraway problem? The Netherlands, too, tracks (and prosecutes) specific population groups with algorithms. For instance in Roermond, where cameras sound the alarm for cars with Eastern European licence plates.
By Florentijn van Rootselaar for OneWorld on January 14, 2021
How many more Black men will be wrongfully arrested before this country puts a stop to the unregulated use of facial recognition software?
From Washington Post on December 31, 2020
This episode is part of the GDC Webinar series that took place in September 2020. How do digital technologies mediate racism? It is increasingly clear that digital technologies, including auto-complete functions, facial recognition, and profiling tools, are not neutral but racialised in specific ways. This webinar focuses on the different modes of programmed racism. We present historical and contemporary examples of racial bias in computational systems and learn about the potential of Civic AI. We discuss the need for a global perspective and postcolonial approaches to computation and discrimination. What research agenda is needed to address current problems and inequalities?

Chair: Lonneke van der Velden, University of Amsterdam.

Speakers: Sennay Ghebreab, Associate Professor of Informatics, University of Amsterdam, and Scientific Director of the Civic AI Lab for civic-centred and community-minded design and development of AI; Linnet Taylor, Associate Professor at the Tilburg Institute for Law, Technology, and Society (TILT) and PI of the ERC-funded Global Data Justice Project; Payal Arora, Professor and Chair in Technology, Values, and Global Media Cultures at the Erasmus School of Philosophy, Erasmus University Rotterdam, and author of ‘The Next Billion Users’ (Harvard University Press).
From Spotify on November 24, 2020
The people in this story may look familiar, like ones you’ve seen on Facebook or Twitter or Tinder. But they don’t exist. They were born from the mind of a computer, and the technology behind them is improving at a startling pace.
By Kashmir Hill for The New York Times on November 21, 2020
Special rapporteur on racism and xenophobia believes there is a misconception that biosurveillance technology is without bias.
By Katy Fallon for The Guardian on November 11, 2020
In this interview with Jair Schalkwijk and Naomi Appelman, we try to bring some transparency to the use of facial recognition technologies in law enforcement.
By Margarita Osipian for The Hmm on October 8, 2020
Just as the death of George Floyd sparked worldwide protests, the biased image-processing technology PULSE did so in the scientific world. There were calls for a ban, but neuroinformatics researcher Sennay Ghebreab wonders whether a digital iconoclasm will solve the problem.
By Sennay Ghebreab for Vrij Nederland on October 5, 2020
Users highlight examples of feature automatically focusing on white faces over black ones.
By Alex Hern for The Guardian on September 21, 2020