Researchers at the Allen Institute for AI created Ask Delphi to make ethical judgments — but it turned out to be awfully bigoted and racist instead.
By Tony Tran for Futurism on October 22, 2021
Health secretary signs up to hi-tech schemes countering health disparities and reflecting minority ethnic groups’ data.
By Andrew Gregory for The Guardian on October 20, 2021
Creating welcoming and safe spaces for racialised people in policymaking is essential for addressing AI harms. Since the beginning of my career as an AI policy researcher, I’ve witnessed many important instances where people of colour were almost totally absent from AI policy conversations. I remember very well the discomfort I experienced when I was stopped at the entrance of a launch event for a report on algorithmic bias. The person tasked with ushering people into the meeting room was convinced that I was not “in the right place”. Following that completely avoidable policing situation, I was in the room, but the room didn’t seem right to me. Although the topic was algorithmic bias and discrimination, I couldn’t spot one racialised person there — people who are most likely to experience algorithmic harm.
By Raziye Buse Çetin for Who Writes The Rules on March 11, 2019
Policymakers are starting to understand that many AI systems exhibit some form of racial bias. So they are happy when computer scientists tell them that ‘debiasing’ is the solution to these problems: test the system for racial and other forms of bias, and make adjustments until these no longer show up in the results.
Continue reading “Why ‘debiasing’ will not solve racist AI”
Behind the promise of automation and the advances of machine learning and AI, often paraded by tech companies like Amazon, Google, Facebook and Tesla, lies a deeply exploitative industry of cheap human labour. In an excerpt published on Rest of World from his forthcoming book, “Work Without the Worker: Labour in the Age of Platform Capitalism,” Phil Jones illustrates how the hidden labour of automation is outsourced to marginalised, racialised and disenfranchised populations within the Global North, as well as in the Global South.
Continue reading “Big Tech is propped up by a globally exploited workforce”
The development and deployment of artificial intelligence (AI) in all areas of public life have raised many concerns about the harmful consequences for society, in particular the impact on marginalised communities. EDRi’s latest report, “Beyond Debiasing: Regulating AI and its Inequalities”, authored by Agathe Balayn and Dr. Seda Gürses, argues that policymakers must tackle the root causes of the power imbalances caused by the pervasive use of AI systems. In promoting technical ‘debiasing’ as the main solution to AI-driven structural inequality, we risk vastly underestimating the scale of the social, economic and political problems AI systems can inflict.
By Agathe Balayn and Seda Gürses for European Digital Rights (EDRi) on September 21, 2021
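Neither piece spells out what a technical ‘debiasing’ check involves, so, purely as an illustration, here is a minimal sketch of the kind of audit statistic such an approach optimises: a demographic-parity gap computed on made-up decision data. The data, column names and choice of metric are assumptions for the example, not taken from the report; and, as both pieces argue, shrinking a number like this does not address the structural causes of the harm.

```python
import pandas as pd

def demographic_parity_gap(df, group_col, decision_col):
    """Difference between the highest and lowest positive-decision rate across groups.
    A 'debiasing' pass typically measures a statistic like this and adjusts the model
    until the gap shrinks."""
    rates = df.groupby(group_col)[decision_col].mean()
    return rates.max() - rates.min(), rates

# Made-up example data: automated decisions for two hypothetical groups.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0],
})

gap, per_group = demographic_parity_gap(decisions, "group", "approved")
print(per_group)                      # positive-decision rate per group
print("parity gap:", round(gap, 2))   # the number a debiasing pass would try to reduce
```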
Brands originally built on racist stereotypes have existed for more than a century. Now racial prejudice is also creeping into the design of tech products and algorithms.
From YouTube on September 15, 2021
The Biden administration must prioritize and address all the ways that AI and technology can exacerbate racial and other inequities.
By Olga Akselrod for American Civil Liberties Union (ACLU) on July 13, 2021
Facebook called it “an unacceptable error.” The company has struggled with other issues related to race.
By Ryan Mac for The New York Times on September 3, 2021
I carry my surname with pride: Ibrahim is the first name of my great-grandfather, which my father entered when he came to the Netherlands from Egypt in the 1970s. My father has sadly passed away; he was the warmest and most kind-hearted man you could imagine. But something strange is going on with my surname: when a computer learns a language from everyday texts on the internet, it turns out to rate non-Western names like Ibrahim as less ‘pleasant’ than Western surnames (Caliskan et al., 2017).
By Meldrid Ibrahim for Mister Motley on August 24, 2021
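The result Ibrahim cites comes from word-embedding association tests (Caliskan et al., 2017), which check whether a model trained on web text places a name closer to ‘pleasant’ or to ‘unpleasant’ words. Below is a rough, self-contained sketch of that idea; the vectors and word lists are invented stand-ins (a real test would use embeddings from a trained model such as GloVe or word2vec), so the numbers it prints are meaningless in themselves.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def pleasantness_association(name_vec, pleasant_vecs, unpleasant_vecs):
    """Mean similarity to 'pleasant' words minus mean similarity to 'unpleasant' words.
    A negative score means the name sits closer to the unpleasant words in the
    embedding space (the pattern Caliskan et al. reported for non-Western names)."""
    return (np.mean([cosine(name_vec, v) for v in pleasant_vecs])
            - np.mean([cosine(name_vec, v) for v in unpleasant_vecs]))

# Invented stand-in vectors; a real test would load embeddings learned from web text.
rng = np.random.default_rng(0)
vocab = ["ibrahim", "smith", "love", "peace", "hatred", "failure"]
embeddings = {w: rng.normal(size=50) for w in vocab}

pleasant = [embeddings[w] for w in ["love", "peace"]]
unpleasant = [embeddings[w] for w in ["hatred", "failure"]]

for name in ["ibrahim", "smith"]:
    score = pleasantness_association(embeddings[name], pleasant, unpleasant)
    print(name, round(score, 3))
```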
A preprint study shows ride-hailing services like Uber, Lyft, and Via charge higher prices in certain neighborhoods based on racial and other biases.
By Kyle Wiggers for VentureBeat on June 12, 2020
Vox host Joss Fong wanted to know… “Why do we think tech is neutral? How do algorithms become biased? And how can we fix these algorithms before they cause harm?”
Continue reading “Are we automating racism?”
Many of us assume that tech is neutral, and we have turned to tech as a way to root out racism, sexism, or other “isms” plaguing human decision-making. But as data-driven systems become a bigger and bigger part of our lives, we also notice more and more when they fail, and, more importantly, that they don’t fail on everyone equally. Glad You Asked host Joss Fong wants to know: Why do we think tech is neutral? How do algorithms become biased? And how can we fix these algorithms before they cause harm?
From YouTube on March 31, 2021
Working to break down the barriers keeping young Black people from careers in AI.
By Abby Ohlheiser for MIT Technology Review on June 30, 2021
She employs AI to get to the roots of health disparities across race, gender, and class.
By Neel V. Patel for MIT Technology Review on June 30, 2021
In many discussions and policy proposals related to addressing and fixing the harms of AI and algorithmic decision-making, much attention and hope have been placed on human oversight as a solution. This article by Ben Green and Amba Kak urges us to question the limits of human oversight, rather than seeing it as a magic bullet. For example, calling for ‘meaningful’ oversight sounds better in theory than in practice. Humans can be prone to automation bias, struggle to evaluate and make decisions based on the results of an algorithm, or exhibit racial biases in response to algorithms. Consequently, these effects can produce racist outcomes, as has already been shown in areas such as policing and housing.
Continue reading “Human-in-the-loop is not the magic bullet to fix AI harms”
Humans are being tasked with overseeing algorithms that were put in place with the promise of augmenting human deficiencies.
By Amba Kak and Ben Green for Slate Magazine on June 15, 2021
For years, Big Tech has set the global AI research agenda. Now, groups like Black in AI and Queer in AI are upending the field’s power dynamics to build AI that serves people.
By Karen Hao for MIT Technology Review on June 14, 2021
In a recent interview with The Guardian, Kate Crawford discusses her new book, Atlas of AI, which delves into the broader landscape of how AI systems work by canvassing the structures of production and material realities. One example is ImageNet, a massive training dataset created by researchers from Stanford, which is used to test whether object recognition algorithms are effective. It was made by scraping photos and images from across the web and hiring crowd workers to label them according to an outdated lexical database created in the 1980s.
Continue reading “AI and its hidden costs”
In 2016, ProPublica investigated the fairness of COMPAS, a system used by courts in the United States to assess the likelihood of a defendant committing another crime. COMPAS uses a risk assessment form to score a defendant’s risk of reoffending. Judges are expected to take this risk prediction into account when they decide on sentencing.
Continue reading “Racist Technology in Action: Predicting future criminals with a bias against Black people”
Successful and ethical artificial intelligence programs take into account behind-the-scenes ‘repair work’ and ‘ghost workers.’
By Sara Brown for MIT Sloan on May 4, 2021
The AI researcher on how natural resources and human labour drive machine learning and the regressive stereotypes that are baked into its algorithms.
By Kate Crawford for The Guardian on June 6, 2021
Women and people of colour are underrepresented and depicted with stereotypes.
From The Economist on June 5, 2021
Computer programs used to predict recidivism and determine prison terms have a high error rate, a secret design, and a demonstrable racial bias.
By Jed S. Rakoff for The New York Review of Books on June 10, 2021
Automated systems from Apple and Google label characters with dark skin as “Animals”.
By Nicolas Kayser-Bril for AlgorithmWatch on May 14, 2021
“Far from a ‘human-centred’ approach, the draft law in its current form runs the risk of enabling Orwellian surveillance states,” writes @sarahchander from @edri.
By Sarah Chander for Euronews on April 22, 2021
Advances in artificial intelligence (AI) technology promise to revolutionize our approach to medicine, finance, business operations, media, and more.
From Federal Trade Commission on April 19, 2021
This week the EU announces new regulations on artificial intelligence. It needs to set clear limits on the most harmful uses of AI, including predictive policing, biometric mass surveillance, and applications that exacerbate historic patterns of racist policing.
By Fieke Jansen and Sarah Chander for EUobserver on April 19, 2021
The company is considering how its use of machine learning may reinforce existing biases.
By Anna Kramer for Protocol on April 14, 2021
The Rekenkamer Rotterdam (a Court of Audit) looked at how the city of Rotterdam is using predictive algorithms and whether that use could lead to ethical problems. In their report, they describe how the city lacks a proper overview of the algorithms it is using, how there is no coordination and thus no one takes responsibility when things go wrong, and how one particular fraud detection algorithm did not use sensitive data (like nationality) directly, but still included so-called proxy variables for ethnicity – like low literacy, which might correlate with ethnicity – in its calculations. According to the Rekenkamer, this could lead to unfair treatment, or as we would call it: ethnic profiling.
Continue reading “Rotterdam’s use of algorithms could lead to ethnic profiling”
The algorithm systematically removes their content or limits how much it can earn from advertising, they allege.
By Reed Albergotti for Washington Post on June 18, 2020
As the European Commission prepares its legislative proposal on artificial intelligence, human rights groups are watching closely for clear rules to limit discriminatory AI. In practice, this means a ban on biometric mass surveillance practices and red lines (legal limits) to stop harmful uses of AI-powered technologies.
By Sarah Chander for European Digital Rights (EDRi) on March 16, 2021
Upcoming rules on AI might make Europe’s race issues a tech problem too.
By Melissa Heikkilä for POLITICO on March 16, 2021
When a secretive start-up scraped the internet to build a facial-recognition tool, it tested a legal and ethical limit — and blew the future of privacy in America wide open.
By Kashmir Hill for The New York Times on March 18, 2021
By Klint Finley for GitHub on February 18, 2021
In his article for One World, Florentijn van Rootselaar shows how the Dutch government uses automated systems to profile certain groups based on their ethnicity. He uses several examples to expose how, even though Western countries are often quick to denounce China’s use of technology to surveil, profile and oppress the Uighurs, the same states themselves use or contribute to the development of similar technologies.
Continue reading “The Dutch government’s love affair with ethnic profiling”
The answer to that question depends on your skin colour, apparently. An AlgorithmWatch reporter, Nicolas Kayser-Bril, conducted an experiment that went viral on Twitter, showing that Google Cloud Vision (a service based on a subset of AI known as “computer vision” that focuses on automated image labelling) labelled an image of a dark-skinned individual holding a thermometer with the word “gun”, whilst a lighter-skinned individual was labelled as holding an “electronic device”.
Continue reading “Racist technology in action: Gun, or electronic device?”
What can universities learn from Antoine Griezmann, FC Barcelona’s second striker? Pursue human rights in both word and deed, argue Joshua B. Cohen and Assamaual Saidi, and in that respect there is still a world to be won.
By Assamaual Saidi and Joshua B. Cohen for Het Parool on February 11, 2021
Adolescents spend ever greater portions of their days online and are especially vulnerable to discrimination. That’s a worrying combination.
By Avriel Epps-Darling for The Atlantic on October 24, 2020
Google has fired AI researcher and ethicist Timnit Gebru after she wrote an email criticising Google’s policies around diversity while she struggled with her leadership to get a critical paper on AI published. This angered thousands of her former colleagues and academics, who pointed to the unequal treatment Gebru received as a Black woman and worried about the integrity of Google’s research.
Continue reading “Google fires AI researcher Timnit Gebru”