The Markup has published an overview of the ways in which algorithms were given decision-making power in 2020 and took a wrong turn.
Google apologizes after its Vision AI produced racist results
A Google service that automatically labels images produced starkly different results depending on the skin tone of the person in the image. The company fixed the issue, but the problem is likely much broader.
By Nicolas Kayser-Bril for AlgorithmWatch on April 7, 2020
Machine learning is a honeypot for phrenologists
When we say that “an algorithm is biased” we usually mean, “biased people made an algorithm.” This explains why so much machine learning prediction turns into phrenology.
By Cory Doctorow for Pluralistic on January 15, 2021
How the Netherlands uses A.I. for ethnic profiling
China using artificial intelligence to oppress the Uyghurs: sounds like a distant problem? The Netherlands, too, tracks (and prosecutes) specific population groups with algorithms. As in Roermond, where cameras sound the alarm at cars with Eastern European license plates.
By Florentijn van Rootselaar for OneWorld on January 14, 2021
Timnit Gebru’s Exit From Google Exposes a Crisis in AI
The situation has made clear that the field needs to change. Here’s where to start, according to a current and a former Googler.
By Alex Hanna and Meredith Whittaker for WIRED on December 31, 2020
How our data encodes systematic racism
Technologists must take responsibility for the toxic ideologies that our data sets and algorithms reflect.
By Deborah Raji for MIT Technology Review on December 10, 2020
This is the Stanford vaccine algorithm that left out frontline doctors
The university hospital blamed a “very complex algorithm” for its unequal vaccine distribution plan. Here’s what went wrong.
By Eileen Guo and Karen Hao for MIT Technology Review on December 21, 2020
Algorithms Behaving Badly: 2020 Edition
Computers are being asked to make more and more weighty decisions, even as their performance reviews are troubling.
From The Markup on December 15, 2020
We read the paper that forced Timnit Gebru out of Google. Here’s what it says
The company’s star ethics researcher highlighted the risks of large language models, which are key to Google’s business.
By Karen Hao for MIT Technology Review on December 4, 2020
Programmed Racism – Global Digital Cultures
This episode is part of the GDC webinar series that took place in September 2020. How do digital technologies mediate racism? It is increasingly clear that digital technologies, including auto-complete functions, facial recognition, and profiling tools, are not neutral but racialized in specific ways. This webinar focuses on the different modes of programmed racism. We present historical and contemporary examples of racial bias in computational systems and learn about the potential of Civic AI. We discuss the need for a global perspective and postcolonial approaches to computation and discrimination. What research agenda is needed to address current problems and inequalities?
Chair: Lonneke van der Velden, University of Amsterdam.
Speakers: Sennay Ghebreab, Associate Professor of Informatics, University of Amsterdam, and Scientific Director of the Civic AI Lab for civic-centered and community-minded design and development of AI; Linnet Taylor, Associate Professor at the Tilburg Institute for Law, Technology, and Society (TILT) and PI of the ERC-funded Global Data Justice Project; Payal Arora, Professor and Chair in Technology, Values, and Global Media Cultures at the Erasmus School of Philosophy, Erasmus University Rotterdam, and author of ‘The Next Billion Users’ (Harvard University Press).
From Spotify on November 24, 2020
Podcast Het Vraagstuk
On paper, everyone has human rights, but what does that look like in practice? In Het Vraagstuk, David Achter de Molen goes in search of answers to urgent questions about your human rights.
From College voor de Rechten van de Mens
Cloud Ethics
In Cloud Ethics Louise Amoore examines how machine learning algorithms are transforming the ethics and politics of contemporary society. Conceptualizing algorithms as ethicopolitical entities that are entangled with the data attributes of people, Amoore outlines how algorithms give incomplete accounts of themselves, learn through relationships with human practices, and exist in the world in ways that exceed their source code. In these ways, algorithms and their relations to people cannot be understood by simply examining their code, nor can ethics be encoded into algorithms. Instead, Amoore locates the ethical responsibility of algorithms in the conditions of partiality and opacity that haunt both human and algorithmic decisions. To this end, she proposes what she calls cloud ethics—an approach to holding algorithms accountable by engaging with the social and technical conditions under which they emerge and operate.
By Louise Amoore for Duke University Press on May 1, 2020
Community Defense: Sarah T. Hamid on Abolishing Carceral Technologies
A conversation about how to break cages.
By Sarah T. Hamid for Logic on August 31, 2020
UN warns of impact of smart borders on refugees: ‘Data collection isn’t apolitical’
Special rapporteur on racism and xenophobia believes there is a misconception that biosurveillance technology is without bias.
By Katy Fallon for The Guardian on November 11, 2020
Alexandria Ocasio-Cortez Says Algorithms Can Be Racist. Here’s Why She’s Right.
Algorithms are written by humans, so they can reflect human biases.
By Maya Kosoff for Live Science on January 29, 2019
Decolonising Digital Rights: Why It Matters and Where Do We Start?
This speech was given by DFF director, Nani Jansen Reventlow, on 9 October as the keynote for the 2020 Anthropology + Technology Conference.
By Nani Jansen Reventlow for Digital Freedom Fund on October 23, 2020
Call for 2021-2022 Faculty Fellows: Race and Technology
Data & Society is assembling its eighth class of fellows to join us for 10 months, starting September 1, 2021.
From Data & Society on October 14, 2020
The Netherlands needs an algorithm watchdog
Privacy: Despite the childcare benefits scandal, the government keeps using dubious algorithms, observes Dagmar Oudshoorn. Time for a regulator.
By Dagmar Oudshoorn for NRC on October 14, 2020
Big Data’s Disparate Impact
Advocates of algorithmic techniques like data mining argue that these techniques eliminate human biases from the decision-making process. But an algorithm is only as good as the data it works with. Data is frequently imperfect in ways that allow these algorithms to inherit the prejudices of prior decision makers. In other cases, data may simply […]
By Andrew D. Selbst and Solon Barocas for California Law Review on June 1, 2016
Yes, facial recognition technology discriminates – but a ban is not the solution
Just as the death of George Floyd led to worldwide protests, the biased image-processing technology PULSE did the same in the scientific world. There were calls for a ban, but neuroinformatics researcher Sennay Ghebreab wonders whether a digital iconoclasm solves the problem.
By Sennay Ghebreab for Vrij Nederland on October 5, 2020
How (Not) to Test for Algorithmic Bias
Predictive and decision-making algorithms are playing an increasingly prominent role in our lives. They help determine what ads we see on social media, where police are deployed, who will be given a loan or a job, and whether someone will be released on bail or granted parole. Part of this is due to the recent rise of machine learning. But some algorithms are relatively simple and don’t involve any AI or ‘deep learning.’
By Brian Hedden for Kevin Dorst
How algorithms learn to think in discriminatory ways (and how we can fix that)
Twitter appears to highlight white people in photos sooner than Black people, user tests showed this week. Media wrote about “racist algorithms”, but can we really call them that? And how does discrimination arise in computer systems?
By Rutger Otto for NU.nl on September 25, 2020
‘During the Second World War, we did have something to hide’
What lessons about privacy can we draw now from the 1943 attack on the Amsterdam population registry? ‘From a lack of freedom, you gain a clearer perspective on what freedom means.’
By Hans de Zwart for De Correspondent on May 8, 2014
Robot Teachers, Racist Algorithms, and Disaster Pedagogy
I have volunteered to be a guest speaker in classes this Fall. It’s really the least I can do to help teachers and students through another tough term. I spoke tonight in Dorothy Kim’s class “Race Before Race: Premodern Critical Race Studies.” Here’s a bit of what I said…
By Audrey Watters for Hack Education on September 3, 2020
Data-Informed Predictive Policing Was Heralded As Less Biased. Is It?
Critics say it merely techwashes injustice.
By Annie Gilbertson for The Markup on August 20, 2020
UK ditches exam results generated by biased algorithm after student protests
The UK government has said that students in England and Wales will no longer receive exam results based on a controversial algorithm. The system developed by exam regulator Ofqual was accused of being biased.
By Jon Porter for The Verge on August 17, 2020
England A-level downgrades hit pupils from disadvantaged areas hardest
Analysis also shows pupils at private schools benefited most from the algorithm.
By Niamh McIntyre and Richard Adams for The Guardian on August 13, 2020
Who won and who lost: when A-levels meet the algorithm
Disadvantaged students among those more likely to have received lower grades than predicted.
By Cath Levett, Niamh McIntyre, Pamela Duncan and Rhi Storer for The Guardian on August 13, 2020
Philosophers On GPT-3 (updated with replies by GPT-3)
Nine philosophers explore the various issues and questions raised by the newly released language model, GPT-3, in this edition of Philosophers On.
By Amanda Askell, Annette Zimmermann, C. Thi Nguyen, Carlos Montemayor, David Chalmers, GPT-3, Henry Shevlin, Justin Khoo, Regina Rini and Shannon Vallor for Daily Nous on July 30, 2020
Instagram ‘censorship’ of black model’s photo reignites claims of race bias
#IwanttoseeNyome outcry after social media platform repeatedly removes pictures of Nyome Nicholas-Williams.
By Nosheen Iqbal for The Guardian on August 9, 2020
Algorithms Can Be a Tool For Justice—If Used the Right Way
Companies like Netflix, Facebook, and Uber deploy algorithms in search of greater efficiency. But when used to evaluate the powerful systems that judge us, algorithms can spur social progress in ways nothing else can.
By Noam Cohen for WIRED on October 25, 2018
Dissecting racial bias in an algorithm used to manage the health of populations
The U.S. health care system uses commercial algorithms to guide health decisions. Obermeyer et al. find evidence of racial bias in one widely used algorithm, such that Black patients assigned the same level of risk by the algorithm are sicker than White patients (see the Perspective by Benjamin). The authors estimated that this racial bias reduces the number of Black patients identified for extra care by more than half. Bias occurs because the algorithm uses health costs as a proxy for health needs. Less money is spent on Black patients who have the same level of need, and the algorithm thus falsely concludes that Black patients are healthier than equally sick White patients. Reformulating the algorithm so that it no longer uses costs as a proxy for needs eliminates the racial bias in predicting who needs extra care.
By Brian Powers, Christine Vogeli, Sendhil Mullainathan and Ziad Obermeyer for Science on October 25, 2019
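The cost-as-proxy mechanism the authors describe is simple enough to demonstrate on synthetic data. Below is a minimal Python sketch, not the paper’s actual model: the distributions, the 30% spending gap, and the top-decile cutoff are all invented for illustration. Ranking patients by spending under-selects the group that receives less care at the same level of need, while ranking by need itself treats both groups equally.

# A minimal sketch of the proxy-label mechanism described above.
# All numbers are made up for illustration; the real study audited
# a commercial risk-scoring algorithm on production data.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Half the synthetic population is Black, half White.
black = rng.random(n) < 0.5
# True health need, identically distributed across both groups.
need = rng.gamma(shape=2.0, scale=1.0, size=n)

# Spending tracks need, but less is spent on Black patients
# with the same level of need (the bias the paper identifies).
spend = need * np.where(black, 0.7, 1.0) + rng.normal(0, 0.1, n)

def top_decile_flag_rates(score):
    # Flag the top 10% by score for extra care, as a risk
    # algorithm might, and report the flag rate per group.
    flagged = score >= np.quantile(score, 0.9)
    return flagged[black].mean(), flagged[~black].mean()

# Ranking by cost (the proxy) under-selects Black patients...
print("flag rates (Black, White), cost proxy:", top_decile_flag_rates(spend))
# ...while ranking by true need selects both groups at equal rates.
print("flag rates (Black, White), true need:", top_decile_flag_rates(need))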
Algorithmic Justice League – Unmasking AI harms and biases
Artificial intelligence can amplify racism, sexism, and other forms of discrimination. We deserve more accountable and equitable AI.
From Algorithmic Justice League
Google Ad Portal Equated “Black Girls” with Porn
Searching Google’s ad buying portal for “Black girls” returned hundreds of terms leading to “adult content.”
By Aaron Sankin and Leon Yin for The Markup on July 23, 2020