Chicago’s predictive policing program told a man he would be involved with a shooting, but it couldn’t determine which side of the gun he would be on. Instead, it made him the victim of a violent crime.
By Matt Stroud for The Verge on May 24, 2021
Computer programs used to predict recidivism and determine prison terms have a high error rate, a secret design, and a demonstrable racial bias.
By Jed S. Rakoff for The New York Review of Books on June 10, 2021
Automated systems from Apple and Google label characters with dark skin as “Animals”.
By Nicolas Kayser-Bril for AlgorithmWatch on May 14, 2021
Experience the world of face detection algorithms in this freaky test.
By Tijmen Schep for How Normal Am I
The enduring idea that technology will be able to solve many of society’s existing problems continues to permeate governments. For the EUObserver, Fieke Jansen and Sarah Chander illustrate some of the problematic and harmful uses of ‘predictive’ algorithmic systems by states and public authorities across the UK and Europe.
Continue reading “Racist and classist predictive policing exists in Europe too”

From Siri to Alexa to Google Now, voice-based virtual assistants have become increasingly ubiquitous in our daily lives. So it is unsurprising that yet another AI technology – speech recognition – has been reported to be biased against black people.
Continue reading “Racist Technology in Action: Speech recognition systems by major tech companies are biased”

This report investigates how algorithmic discrimination challenges the set of legal guarantees put in place in Europe to combat discrimination and ensure equal treatment. More specifically, it examines whether and how the current gender equality and non-discrimination legislative framework in place in the EU can adequately capture and redress algorithmic discrimination. It explores the gaps and weaknesses that emerge at both the EU and national levels from the interaction between, on the one hand, the specific types of discrimination that arise when algorithms are used in decision-making systems and, on the other, the particular material and personal scope of the existing legislative framework. This report also maps out the existing legal solutions, accompanying policy measures and good practice to address and redress algorithmic discrimination both at EU and national levels. Moreover, this report proposes its own integrated set of legal, knowledge-based and technological solutions to the problem of algorithmic discrimination.
By Janneke Gerards and Raphaële Xenidis for Publication Office of the European Union on March 10, 2021
Advances in artificial intelligence (AI) technology promise to revolutionize our approach to medicine, finance, business operations, media, and more.
From Federal Trade Commission on April 19, 2021
A secretive algorithm that’s constantly being tweaked can turn influencers’ accounts, and their prospects, upside down.
By Dara Kerr for The Markup on April 22, 2021
The company is considering how its use of machine learning may reinforce existing biases.
By Anna Kramer for Protocol on April 14, 2021
The Rekenkamer Rotterdam (a Court of Audit) looked at how the city of Rotterdam is using predictive algorithms and whether that use could lead to ethical problems. In their report, they describe how the city lacks a proper overview of the algorithms that it is using, how there is no coordination and thus no one takes responsibility when things go wrong, and how sensitive data (like nationality) were not used by one particular fraud detection algorithm, yet so-called proxy variables for ethnicity – like low literacy, which can correlate with ethnicity – were still part of the calculations (a minimal sketch of this proxy effect follows the items below). According to the Rekenkamer this could lead to unfair treatment, or as we would call it: ethnic profiling.
Continue reading “Rotterdam’s use of algorithms could lead to ethnic profiling”

The algorithms that the city of Rotterdam uses, for example to detect benefits fraud, can lead to ‘biased outcomes’. This is the conclusion of the Rekenkamer Rotterdam in a report published on Thursday. Its chairman, Paul Hofstra, explains what went wrong.
By Paul Hofstra and Rik Kuiper for Volkskrant on April 15, 2021
The city of Rotterdam uses algorithms to support its decision-making. Although there is attention within the municipality for the ethical use of algorithms, awareness of why this is necessary is not yet very widespread. This can lead to a lack of transparency around algorithms and to biased outcomes, as with an algorithm aimed at combating benefits fraud. This and more is what the Rekenkamer Rotterdam concludes in its report ‘Gekleurde technologie’ (‘Coloured Technology’).
From Rekenkamer Rotterdam on April 14, 2021
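The mechanism the Rekenkamer describes is worth making concrete: a model can treat groups differently even when the sensitive attribute itself is removed, as long as a correlated proxy remains in the data. Below is a minimal sketch in Python – entirely synthetic data and hypothetical feature names, not the city’s actual system – of how a proxy such as literacy can carry group information into a fraud-risk score:

```python
# Minimal sketch of proxy discrimination (synthetic data, hypothetical names).
# The protected attribute `group` is never given to the model, but a
# correlated proxy (`literacy`) is, so risk scores still differ by group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 10_000

# Hypothetical protected attribute; the model never sees this column.
group = rng.integers(0, 2, size=n)

# Proxy feature: its distribution differs between the two groups.
literacy = rng.normal(loc=np.where(group == 1, -0.8, 0.8), scale=1.0)

# Historical labels reflect past enforcement that concentrated on
# low-literacy cases, so the labels are already entangled with the proxy.
p_flagged = 1.0 / (1.0 + np.exp(2.0 * literacy))
fraud_label = (rng.random(n) < 0.3 * p_flagged).astype(int)

# The model is trained only on the proxy and a neutral feature.
income = rng.normal(size=n)
X = np.column_stack([literacy, income])
scores = LogisticRegression().fit(X, fraud_label).predict_proba(X)[:, 1]

for g in (0, 1):
    print(f"mean predicted risk, group {g}: {scores[group == g].mean():.3f}")
```

Because the historical labels already reflect enforcement that focused on low-literacy cases, the model reproduces the group disparity without ever seeing the group variable – which is exactly why dropping nationality from the inputs is not enough.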
Immigrant rights campaigners bring legal challenge to Home Office on algorithm that streams visa applicants.
By Henry McDonald for The Guardian on October 29, 2019
The use of software to automatically detect cheating on online exams – online proctoring – has been the go-to solution for many schools and universities in response to the COVID-19 pandemic. In this article, Shea Swauger addresses some of the potential discriminatory, privacy and security harms that can impact groups of students across class, gender, race, and disability lines. Swauger critiques how these technologies encode “normal” bodies – cisgender, white, able-bodied, neurotypical, male – as the standard, and how students who do not (or cannot) conform are punished by them.
Continue reading “Online proctoring excludes and discriminates”

Upcoming rules on AI might make Europe’s race issues a tech problem too.
By Melissa Heikkilä for POLITICO on March 16, 2021
Dutch tax authorities used algorithms to automate an austere and punitive war on low-level fraud—the results were catastrophic.
By Gabriel Geiger for VICE on March 1, 2021
Dutch benefits scandal highlights need for EU scrutiny.
By Nani Jansen Reventlow for POLITICO on March 2, 2021
A growing industry wants to scrutinize the algorithms that govern our lives—but it needs teeth.
By Alfred Ng for The Markup on February 23, 2021
According to data from The Markup’s Citizen Browser project, there are major disparities in who is shown public health information about the pandemic.
By Corin Faife and Dara Kerr for The Markup on March 4, 2021
Cheating is not a technological problem, but a social and pedagogical problem. Technology is often blamed for creating the conditions in which cheating proliferates and is then offered as the solution to the problem it created; both claims are false.
By Shea Swauger for Hybrid Pedagogy on April 2, 2020
GitHub is where people build software. More than 56 million people use GitHub to discover, fork, and contribute to over 100 million projects.
By Klint Finley for GitHub on February 18, 2021
In his article for One World, Florentijn van Rootselaar shows how the Dutch government uses automated systems to profile certain groups based on their ethnicity. He uses several examples to expose how, even though Western countries are often quick to denounce China’s use of technology to surveil, profile and oppress the Uighurs, the same states themselves use or contribute to the development of similar technologies.
Continue reading “The Dutch government’s love affair with ethnic profiling”

The answer to that question depends on your skin colour, apparently. AlgorithmWatch reporter Nicolas Kayser-Bril conducted an experiment, which went viral on Twitter, showing that Google Vision Cloud (a service based on the subset of AI known as “computer vision”, which focuses on automated image labelling) labelled an image of a dark-skinned individual holding a thermometer with the word “gun”, while a lighter-skinned individual was labelled as holding an “electronic device”.

Continue reading “Racist technology in action: Gun, or electronic device?”
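Kayser-Bril’s experiment is straightforward to reproduce. Here is a minimal sketch, assuming the google-cloud-vision Python client library and valid Google Cloud credentials (the image file names are placeholders), that fetches the labels for two images so their annotations can be compared side by side:

```python
# Minimal sketch: compare Cloud Vision labels across two images.
# Assumes `pip install google-cloud-vision` and GOOGLE_APPLICATION_CREDENTIALS
# pointing at a valid service-account key; image paths are placeholders.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

def labels_for(path: str) -> list[tuple[str, float]]:
    """Return (description, confidence) pairs for one local image."""
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)
    return [(lab.description, lab.score) for lab in response.label_annotations]

# Hypothetical file names for two otherwise-identical photos.
for path in ["hand_dark_skin.jpg", "hand_light_skin.jpg"]:
    print(path, labels_for(path))
```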
The human-centered approach that can combat algorithmic bias.
By Jessie Daniels for Quartz on April 3, 2019
Technology has never been colorblind. It’s time to abolish notions of “universal” users of software.
From The Internet Health Report 2020 on January 1, 2021
Adolescents spend ever greater portions of their days online and are especially vulnerable to discrimination. That’s a worrying combination.
By Avriel Epps-Darling for The Atlantic on October 24, 2020
While many of us had our attention focused on the use of biometric surveillance technologies in managing the COVID-19 pandemic, a new UN report by Prof. E. Tendayi Achiume forcefully puts the spotlight on the racial and discriminatory dimension of biometric surveillance technology in border enforcement.
Continue reading “Racism and “Smart Borders””

Google has fired AI researcher and ethicist Timnit Gebru after she wrote an email criticising Google’s policies around diversity while she was struggling with her leadership to get a critical paper on AI published. This angered thousands of her former colleagues and academics. They pointed to the unequal treatment that Gebru received as a black woman and worried about the integrity of Google’s research.
Continue reading “Google fires AI researcher Timnit Gebru”

The Markup has published an overview of the ways in which algorithms were given decision-making power in 2020 and took a wrong turn.
Continue reading “A year of algorithms behaving badly”

A Google service that automatically labels images produced starkly different results depending on the skin tone in a given image. The company fixed the issue, but the problem is likely much broader.
By Nicolas Kayser-Bril for AlgorithmWatch on April 7, 2020
When we say that “an algorithm is biased” we usually mean, “biased people made an algorithm.” This explains why so much machine learning prediction turns into phrenology.
By Cory Doctorow for Pluralistic on January 15, 2021
The situation has made clear that the field needs to change. Here’s where to start, according to a current and a former Googler.
By Alex Hanna and Meredith Whittaker for WIRED on December 31, 2020
China using artificial intelligence to oppress the Uyghurs: sounds like something far removed from your daily life? The Netherlands, too, tracks (and prosecutes) specific population groups with algorithms. As in Roermond, where cameras raise the alarm for cars with an Eastern European licence plate.
By Florentijn van Rootselaar for OneWorld on January 14, 2021
Technologists must take responsibility for the toxic ideologies that our data sets and algorithms reflect.
By Deborah Raji for MIT Technology Review on December 10, 2020
The university hospital blamed a “very complex algorithm” for its unequal vaccine distribution plan. Here’s what went wrong.
By Eileen Guo and Karen Hao for MIT Technology Review on December 21, 2020
Computers are being asked to make more and more weighty decisions, even as their performance reviews are troubling.
From The Markup on December 15, 2020
The company’s star ethics researcher highlighted the risks of large language models, which are key to Google’s business.
By Karen Hao for MIT Technology Review on December 4, 2020
This episode is part of the GDC webinar series that took place in September 2020. How do digital technologies mediate racism? It is increasingly clear that digital technologies, including auto-complete functions, facial recognition, and profiling tools, are not neutral but racialized in specific ways. This webinar focuses on the different modes of programmed racism. We present historical and contemporary examples of racial bias in computational systems and learn about the potential of Civic AI. We discuss the need for a global perspective and postcolonial approaches to computation and discrimination. What research agenda is needed to address current problems and inequalities?

Chair: Lonneke van der Velden, University of Amsterdam.

Speakers: Sennay Ghebreab, Associate Professor of Informatics, University of Amsterdam, and Scientific Director of the Civic AI Lab for civic-centered and community-minded design, development and deployment of AI; Linnet Taylor, Associate Professor at the Tilburg Institute for Law, Technology, and Society (TILT) and PI of the ERC-funded Global Data Justice project; Payal Arora, Professor and Chair in Technology, Values, and Global Media Cultures at the Erasmus School of Philosophy, Erasmus University Rotterdam, and author of ‘The Next Billion Users’ (Harvard University Press).
From Spotify on November 24, 2020
On paper everyone has human rights, but what does this look like in practice? In Het Vraagstuk, David Achter de Molen goes in search of answers to urgent questions about your human rights.
From College voor de Rechten van de Mens