Decode the Default

Technology has never been colorblind. It’s time to abolish notions of “universal” users of software.

From The Internet Health Report 2020 on January 1, 2021

Google fires AI researcher Timnit Gebru

Google has fired AI researcher and ethicist Timnit Gebru after she wrote an email criticizing Google’s diversity policies while struggling with her leadership to get a critical paper on AI published. The firing angered thousands of her former colleagues and academics, who pointed to the unequal treatment Gebru received as a Black woman and worried about the integrity of Google’s research.

How the Netherlands uses A.I. for ethnic profiling

China using artificial intelligence to oppress the Uyghurs: does that sound like something far removed from your own life? The Netherlands, too, tracks (and prosecutes) specific population groups with algorithms. As in Roermond, where cameras sound the alarm for cars with Eastern European license plates.

By Florentijn van Rootselaar for OneWorld on January 14, 2021

Programmed Racism – Global Digital Cultures

This episode is part of the GDC Webinar series that took place in September 2020. How do digital technologies mediate racism? It is increasingly clear that digital technologies, including auto-complete functions, facial recognition, and profiling tools, are not neutral but racialized in specific ways. This webinar focuses on the different modes of programmed racism. We present historical and contemporary examples of racial bias in computational systems and learn about the potential of Civic AI. We discuss the need for a global perspective and postcolonial approaches to computation and discrimination. What research agenda is needed to address current problems and inequalities?

Chair: Lonneke van der Velden, University of Amsterdam.

Speakers: Sennay Ghebreab, Associate Professor of Informatics, University of Amsterdam, and Scientific Director of the Civic AI Lab for civic-centered and community-minded design, development and deployment of AI; Linnet Taylor, Associate Professor at the Tilburg Institute for Law, Technology, and Society (TILT) and PI of the ERC-funded Global Data Justice Project; Payal Arora, Professor and Chair in Technology, Values, and Global Media Cultures at the Erasmus School of Philosophy, Erasmus University Rotterdam, and author of ‘The Next Billion Users’ (Harvard University Press).

From Spotify on November 24, 2020

Podcast Het Vraagstuk

On paper, everyone has human rights, but what does that look like in practice? In Het Vraagstuk, David Achter de Molen goes in search of answers to urgent questions about your human rights.

From College voor de Rechten van de Mens

Cloud Ethics

In Cloud Ethics Louise Amoore examines how machine learning algorithms are transforming the ethics and politics of contemporary society. Conceptualizing algorithms as ethicopolitical entities that are entangled with the data attributes of people, Amoore outlines how algorithms give incomplete accounts of themselves, learn through relationships with human practices, and exist in the world in ways that exceed their source code. In these ways, algorithms and their relations to people cannot be understood by simply examining their code, nor can ethics be encoded into algorithms. Instead, Amoore locates the ethical responsibility of algorithms in the conditions of partiality and opacity that haunt both human and algorithmic decisions. To this end, she proposes what she calls cloud ethics—an approach to holding algorithms accountable by engaging with the social and technical conditions under which they emerge and operate.

By Louise Amoore for Duke University Press on May 1, 2020

Big Data’s Disparate Impact

Advocates of algorithmic techniques like data mining argue that these techniques eliminate human biases from the decision-making process. But an algorithm is only as good as the data it works with. Data is frequently imperfect in ways that allow these algorithms to inherit the prejudices of prior decision makers. In other cases, data may simply […]

By Andrew D. Selbst and Solon Barocas for California Law Review on June 1, 2016
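The core claim here — that a model fit to records of biased past decisions carries those biases forward — can be made concrete with a toy simulation. The sketch below is not from the article; the hiring scenario, the group labels, and the 0.5 “penalty” applied by prior decision makers are illustrative assumptions only.

```python
# Minimal sketch (assumptions mine, not the article's) of how the
# "prejudices of prior decision makers" leak into a model via its labels.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two equally qualified groups; group 1 is the disadvantaged group.
group = rng.integers(0, 2, size=n)
qualification = rng.normal(0, 1, size=n)  # same distribution for both groups

# Historical decisions: same qualification bar, but a biased penalty
# applied to group 1 by prior decision makers.
historical_hire = (qualification - 0.5 * group) > 0.0

# "Learn" a per-group hiring rate from the historical labels, standing in
# for any model trained on this data.
for g in (0, 1):
    rate = historical_hire[group == g].mean()
    print(f"group {g}: historical hire rate = {rate:.2%}")
```

Even though both groups are drawn from the same qualification distribution, anything fit to these historical labels learns the gap the prior decision makers introduced — the disparity travels through the data, not the code.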

How (Not) to Test for Algorithmic Bias

Predictive and decision-making algorithms are playing an increasingly prominent role in our lives. They help determine what ads we see on social media, where police are deployed, who will be given a loan or a job, and whether someone will be released on bail or granted parole. Part of this is due to the recent rise of machine learning. But some algorithms are relatively simple and don’t involve any AI or ‘deep learning.’

By Brian Hedden for Kevin Dorst

Robot Teachers, Racist Algorithms, and Disaster Pedagogy

I have volunteered to be a guest speaker in classes this Fall. It’s really the least I can do to help teachers and students through another tough term. I spoke tonight in Dorothy Kim’s class “Race Before Race: Premodern Critical Race Studies.” Here’s a bit of what I said…

By Audrey Watters for Hack Education on September 3, 2020

Philosophers On GPT-3 (updated with replies by GPT-3)

Nine philosophers explore the various issues and questions raised by the newly released language model, GPT-3, in this edition of Philosophers On.

By Amanda Askell, Annette Zimmermann, C. Thi Nguyen, Carlos Montemayor, David Chalmers, GPT-3, Henry Shevlin, Justin Khoo, Regina Rini and Shannon Vallor for Daily Nous on July 30, 2020

Dissecting racial bias in an algorithm used to manage the health of populations

The U.S. health care system uses commercial algorithms to guide health decisions. Obermeyer et al. find evidence of racial bias in one widely used algorithm, such that Black patients assigned the same level of risk by the algorithm are sicker than White patients (see the Perspective by Benjamin). The authors estimated that this racial bias reduces the number of Black patients identified for extra care by more than half. Bias occurs because the algorithm uses health costs as a proxy for health needs. Less money is spent on Black patients who have the same level of need, and the algorithm thus falsely concludes that Black patients are healthier than equally sick White patients. Reformulating the algorithm so that it no longer uses costs as a proxy for needs eliminates the racial bias in predicting who needs extra care.

By Brian Powers, Christine Vogeli, Sendhil Mullainathan and Ziad Obermeyer for Science on October 25, 2019
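The cost-as-proxy mechanism described above can be sketched in a few lines of simulation. The numbers below (the 30% spending gap and the top-decile referral rule) are assumptions of this sketch, not the paper’s data or model.

```python
# Minimal sketch of proxy bias: when less is spent on Black patients at the
# same level of need, ranking patients by predicted cost under-identifies
# Black patients for extra care. All parameters here are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

black = rng.integers(0, 2, size=n).astype(bool)
need = rng.gamma(shape=2.0, scale=1.0, size=n)  # true need, same distribution

# Observed cost tracks need, but with a lower spending rate for Black
# patients at equal need (illustrative 30% gap).
spend_rate = np.where(black, 0.7, 1.0)
cost = need * spend_rate + rng.normal(0, 0.1, size=n)

# "Algorithm": refer the top 10% by cost (the proxy) vs. by true need.
def top_decile(score):
    return score >= np.quantile(score, 0.90)

by_cost, by_need = top_decile(cost), top_decile(need)
print(f"share Black among referrals (cost proxy): {black[by_cost].mean():.1%}")
print(f"share Black among referrals (true need):  {black[by_need].mean():.1%}")
```

Ranking by the proxy selects far fewer Black patients than ranking by need itself, which is the sense in which reformulating the target — not merely the model — removes the bias.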
