Sentenced by Algorithm

Computer programs used to predict recidivism and determine prison terms have a high error rate, a secret design, and a demonstrable racial bias.

By Jed S. Rakoff for The New York Review of Books on June 10, 2021

Why EU needs to be wary that AI will increase racial profiling

This week the EU announces new regulations on artificial intelligence. It needs to set clear limits on the most harmful uses of AI, including predictive policing, biometric mass surveillance, and applications that exacerbate historic patterns of racist policing.

By Fieke Jansen and Sarah Chander for EUobserver on April 19, 2021

Rotterdam’s use of algorithms could lead to ethnic profiling

The Rekenkamer Rotterdam (a Court of Audit) examined how the city of Rotterdam uses predictive algorithms and whether that use could lead to ethical problems. Their report describes how the city lacks a proper overview of the algorithms it is using, and how a lack of coordination means that no one takes responsibility when things go wrong. It also found that one fraud detection algorithm, while not using sensitive data like nationality directly, still included so-called proxy variables for ethnicity, such as low literacy, which can correlate with ethnicity, in its calculations. According to the Rekenkamer, this could lead to unfair treatment, or as we would call it: ethnic profiling.

Continue reading “Rotterdam’s use of algorithms could lead to ethnic profiling”

This is the EU’s chance to stop racism in artificial intelligence

As the European Commission prepares its legislative proposal on artificial intelligence, human rights groups are watching closely for clear rules to limit discriminatory AI. In practice, this means a ban on biometric mass surveillance practices and red lines (legal limits) to stop harmful uses of AI-powered technologies.

By Sarah Chander for European Digital Rights (EDRi) on March 16, 2021

The Dutch government’s love affair with ethnic profiling

In his article for One World, Florentijn van Rootselaar shows how the Dutch government uses automated systems to profile certain groups based on their ethnicity. He uses several examples to expose how, even though Western countries are often quick to denounce China’s use of technology to surveil, profile and oppress the Uighurs, the same states themselves use or contribute to the development of similar technologies.

Continue reading “The Dutch government’s love affair with ethnic profiling”

Racist technology in action: Gun, or electronic device?

The answer to that question depends on your skin colour, apparently. An AlgorithmWatch reporter, Nicholas Kayser-Bril, conducted an experiment that went viral on Twitter. It showed that Google Vision Cloud (a service based on "computer vision", a subset of AI that focuses on automated image labelling) labelled an image of a dark-skinned individual holding a thermometer as a "gun", while a lighter-skinned individual holding one was labelled with "electronic device".

Continue reading “Racist technology in action: Gun, or electronic device?”

Google fires AI researcher Timnit Gebru

Google has fired AI researcher and ethicist Timnit Gebru after she wrote an email criticising Google's policies around diversity while she was struggling with her leadership to get a critical paper on AI published. The firing angered thousands of her former colleagues and academics, who pointed to the unequal treatment Gebru received as a Black woman and worried about the integrity of Google's research.

Continue reading “Google fires AI researcher Timnit Gebru”

How the Netherlands uses AI for ethnic profiling

China using artificial intelligence to oppress the Uyghurs: sounds like something far removed from your daily life? The Netherlands also tracks, and goes after, specific population groups with algorithms. As in Roermond, where cameras raise the alarm for cars with an Eastern European licence plate.

By Florentijn van Rootselaar for OneWorld on January 14, 2021

Discriminating Systems: Gender, Race, and Power in AI

The diversity crisis in AI is well-documented and wide-reaching. It can be seen in unequal workplaces throughout industry and in academia, in the disparities in hiring and promotion, in the AI technologies that reflect and amplify biased stereotypes, and in the resurfacing of biological determinism in automated systems.

By Kate Crawford, Meredith Whittaker and Sarah Myers West for AI Now Institute on April 1, 2019

Designed to Deceive: Do These People Look Real to You?

The people in this story may look familiar, like ones you’ve seen on Facebook or Twitter or Tinder. But they don’t exist. They were born from the mind of a computer, and the technology behind them is improving at a startling pace.

By Kashmir Hill for The New York Times on November 21, 2020

Digital Ethics in Higher Education: 2020

New technologies, especially those relying on artificial intelligence or data analytics, are exciting but also present ethical challenges that deserve our attention and action. Higher education can and must lead the way.

By John O’Brien for EDUCAUSE Review on May 18, 2020

Down with (discriminating) systems

As the EU formulates its response in its upcoming ‘Action Plan on Racism’, EDRi outlines why it must address structural racism in technology as part of upcoming legislation.

By Sarah Chander for European Digital Rights (EDRi) on September 2, 2020

Philosophers On GPT-3 (updated with replies by GPT-3)

Nine philosophers explore the various issues and questions raised by the newly released language model, GPT-3, in this edition of Philosophers On.

By Amanda Askell, Annette Zimmermann, C. Thi Nguyen, Carlos Montemayor, David Chalmers, GPT-3, Henry Shevlin, Justin Khoo, Regina Rini and Shannon Vallor for Daily Nous on July 30, 2020

Artificial racism

While the realisation that algorithms are not sacrosanct is beginning to sink in everywhere, we are starting to use them in very dangerous ways.

By Hasna El Maroudi for Joop on June 19, 2019
