We have written about the racist cropping algorithm that Twitter uses, and have shared how Twitter tried to fix the symptoms. Twitter also instituted an ‘algorithmic bug bounty’, asking researchers to prove bias in its algorithms.
Continue reading “Proof for Twitter’s bias toward lighter faces”
Uber-racist: Racial discrimination in dynamic pricing algorithms
Racial discrimination in dynamic pricing algorithms is neither surprising nor new. VentureBeat writes about another recent study that supports these findings, in the context of the dynamic pricing algorithms used by Uber, Lyft and other ride-hailing apps. Neighbourhoods that were poorer and had larger non-white populations were significantly associated with higher fare prices. A similar issue was discovered in Airbnb’s ‘Smart Pricing’ feature, which aims to help hosts secure more bookings. It turned out to be detrimental to Black hosts, leading to greater social inequality (even if unintentionally).
Continue reading “Uber-racist: Racial discrimination in dynamic pricing algorithms”
Racist Technology in Action: Racist search engine ads
Back in 2013, Harvard professor Latanya Sweeney was one of the first people to demonstrate racism (she called it ‘discrimination’) in online algorithms. She did this with her research on the ad delivery practices of Google.
Continue reading “Racist Technology in Action: Racist search engine ads”
The Secret Bias Hidden in Mortgage-Approval Algorithms
Even accounting for factors lenders said would explain disparities, people of color are denied mortgages at significantly higher rates than White people.
By Emmanuel Martinez and Lauren Kirchner for The Markup on August 25, 2021
Twitter’s algorithmic bias bug bounty could be the way forward, if regulators step in
Twitter opened its image cropping algorithm and gave prizes to people who could find biases in it. While interesting in itself, the program mostly reveals the impotence of regulators.
By Nicolas Kayser-Bril for AlgorithmWatch on August 17, 2021
Researchers find racial discrimination in ‘dynamic pricing’ algorithms used by Uber, Lyft, and others
A preprint study shows ride-hailing services like Uber, Lyft, and Via charge higher prices in certain neighborhoods based on racial and other biases.
By Kyle Wiggers for VentureBeat on June 12, 2020
Adam Bomb on Twitter
Just tell me the reason isn’t what I think it is, @Uber
By Adam Bomb for Twitter on August 15, 2021
Student proves Twitter algorithm ‘bias’ toward lighter, slimmer, younger faces
Company pays $3,500 to Bogdan Kulynych who demonstrated flaw in image cropping software.
By Alex Hern for The Guardian on August 10, 2021
Research by Defcon attendees confirms biases in Twitter’s algorithm
One of Twitter’s algorithms contains biases, researchers discovered during an algorithmic bias bounty competition at Defcon. Among other things, photos of older people and people with disabilities are filtered out by Twitter’s crop tool.
By Stephan Vegelien for Tweakers on August 10, 2021
Are we automating racism?
Vox host Joss Fong wanted to know… “Why do we think tech is neutral? How do algorithms become biased? And how can we fix these algorithms before they cause harm?”
Continue reading “Are we automating racism?”
Opinion: Stop government algorithms that lead to discrimination and exclusion
Government agencies use countless ‘blacklists’ of potential fraudsters. This can lead to (indirect) ethnic profiling and new tragedies, after the childcare benefits scandal (toeslagenaffaire).
By Nani Jansen Reventlow for Volkskrant on July 15, 2021
Why tech needs to focus on the needs of marginalized groups
Marginalized groups are often not represented in technology development. What we need is inclusive participation that centres the concerns of these groups.
By Nani Jansen Reventlow for The World Economic Forum on July 8, 2021
Are We Automating Racism?
Many of us assume that tech is neutral, and we have turned to tech as a way to root out racism, sexism, or other “isms” plaguing human decision-making. But as data-driven systems become a bigger and bigger part of our lives, we also notice more and more when they fail, and, more importantly, that they don’t fail on everyone equally. Glad You Asked host Joss Fong wants to know: Why do we think tech is neutral? How do algorithms become biased? And how can we fix these algorithms before they cause harm?
From YouTube on March 31, 2021
Human-in-the-loop is not the magic bullet to fix AI harms
In many discussions and policy proposals on addressing and fixing the harms of AI and algorithmic decision-making, much attention and hope have been placed on human oversight as a solution. This article by Ben Green and Amba Kak urges us to question the limits of human oversight, rather than seeing it as a magic bullet. For example, calling for ‘meaningful’ oversight sounds better in theory than it works in practice. Humans are prone to automation bias, struggle to evaluate and act on an algorithm’s output, and can exhibit racial biases in response to algorithms. Consequently, these effects can produce racist outcomes, as has already been shown in areas such as policing and housing.
Continue reading “Human-in-the-loop is not the magic bullet to fix AI harms”
Call on the UvA: stop the use of racist proctoring software
The UvA can no longer justify using proctoring software for its exams, now that it is clear that this surveillance software has a negative impact on people of colour in particular.
Continue reading “Call on the UvA: stop the use of racist proctoring software”
Apple’s emoji keyboard is reinforcing Western stereotypes
The feature associates “Africa” with the hut emoji and “China” with the dog emoji.
By Andrew Deck for Rest of World on June 15, 2021
Long overdue: Google has improved its camera app to work better for Black people
The following short video by Vox shows how white skin has always been the norm in photography. Black people didn’t start to look good on film until the 1970s, when furniture makers complained to Kodak that its film didn’t render the difference between dark- and light-grained wood, and chocolate companies were upset that you couldn’t see the difference between dark and light chocolate.
Continue reading “Long overdue: Google has improved its camera app to work better for Black people”
Demographic skews in training data create algorithmic errors
Women and people of colour are underrepresented and depicted with stereotypes.
From The Economist on June 5, 2021
An automated policing program got this man shot twice
Chicago’s predictive policing program told a man he would be involved with a shooting, but it couldn’t determine which side of the gun he would be on. Instead, it made him the victim of a violent crime.
By Matt Stroud for The Verge on May 24, 2021
Sentenced by Algorithm
Computer programs used to predict recidivism and determine prison terms have a high error rate, a secret design, and a demonstrable racial bias.
By Jed S. Rakoff for The New York Review of Books on June 10, 2021
Image classification algorithms at Apple, Google still push racist tropes
Automated systems from Apple and Google label characters with dark skins “Animals”.
By Nicolas Kayser-Bril for AlgorithmWatch on May 14, 2021
How normal am I?
Experience the world of face detection algorithms in this freaky test.
By Tijmen Schep for How Normal Am I
Racist and classist predictive policing exists in Europe too
The enduring idea that technology will be able to solve many of the existing problems in society continues to pervade governments. For the EUObserver, Fieke Jansen and Sarah Chander illustrate some of the problematic and harmful uses of ‘predictive’ algorithmic systems by states and public authorities across the UK and Europe.
Continue reading “Racist and classist predictive policing exists in Europe too”
Racist Technology in Action: Speech recognition systems by major tech companies are biased
From Siri to Alexa to Google Now, voice-based virtual assistants have become increasingly ubiquitous in our daily lives. So it is unsurprising that yet another AI technology – speech recognition systems – has been reported to be biased against black people.
Continue reading “Racist Technology in Action: Speech recognition systems by major tech companies are biased”
Algorithmic discrimination in Europe: challenges and opportunities for gender equality and non-discrimination law
This report investigates how algorithmic discrimination challenges the set of legal guarantees put in place in Europe to combat discrimination and ensure equal treatment. More specifically, it examines whether and how the current gender equality and non-discrimination legislative framework in place in the EU can adequately capture and redress algorithmic discrimination. It explores the gaps and weaknesses that emerge at both the EU and national levels from the interaction between, on the one hand, the specific types of discrimination that arise when algorithms are used in decision-making systems and, on the other, the particular material and personal scope of the existing legislative framework. This report also maps out the existing legal solutions, accompanying policy measures and good practice to address and redress algorithmic discrimination both at EU and national levels. Moreover, this report proposes its own integrated set of legal, knowledge-based and technological solutions to the problem of algorithmic discrimination.
By Janneke Gerards and Raphaële Xenidis for the Publications Office of the European Union on March 10, 2021
Aiming for truth, fairness, and equity in your company’s use of AI
Advances in artificial intelligence (AI) technology promise to revolutionize our approach to medicine, finance, business operations, media, and more.
From Federal Trade Commission on April 19, 2021
Shadow Bans, Dopamine Hits, and Viral Videos, All in the Life of TikTok Creators
A secretive algorithm that’s constantly being tweaked can turn influencers’ accounts, and their prospects, upside down.
By Dara Kerr for The Markup on April 22, 2021
Twitter will share how race and politics shape its algorithms
The company is considering how its use of machine learning may reinforce existing biases.
By Anna Kramer for Protocol on April 14, 2021
Rotterdam’s use of algorithms could lead to ethnic profiling
The Rekenkamer Rotterdam (a Court of Audit) looked at how the city of Rotterdam is using predictive algorithms and whether that use could lead to ethical problems. In their report, they describe how the city lacks a proper overview of the algorithms that it is using, how there is no coordination and thus no one takes responsibility when things go wrong, and how sensitive data (like nationality) were not used by one particular fraud detection algorithm, but that so-called proxy variables for ethnicity – like low literacy, which might correlate with ethnicity – were still part of the calculations. According to the Rekenkamer this could lead to unfair treatment, or as we would call it: ethnic profiling.
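The following is a minimal, synthetic sketch of that proxy effect, assuming nothing about the actual Rotterdam system: the feature names and numbers are made up purely to show how a risk model can reproduce group disparities through a correlated proxy even when the protected attribute itself is never given to the model.

```python
# Synthetic sketch only: "protected", "proxy" and the 80% / 30% / 10% figures
# are invented for illustration and do not describe the Rotterdam algorithm.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (e.g. ethnicity) -- never shown to the model.
protected = rng.integers(0, 2, size=n)

# A proxy feature (e.g. a low-literacy flag) that matches the protected
# attribute 80% of the time in this synthetic data.
proxy = np.where(rng.random(n) < 0.8, protected, 1 - protected)

# Historical "flagged for investigation" labels that were themselves skewed
# towards the proxy group -- the kind of bias an audit tries to surface.
label = (rng.random(n) < np.where(proxy == 1, 0.3, 0.1)).astype(int)

# Train a risk model on the proxy alone; the protected attribute is absent.
model = LogisticRegression().fit(proxy.reshape(-1, 1), label)
scores = model.predict_proba(proxy.reshape(-1, 1))[:, 1]

# Average predicted risk still differs per protected group, because the
# proxy carries the protected attribute's signal into the model.
for group in (0, 1):
    print(f"protected group {group}: mean predicted risk = {scores[protected == group].mean():.3f}")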
Continue reading “Rotterdam’s use of algorithms could lead to ethnic profiling”
Algorithms used by the city of Rotterdam can lead to ‘biased outcomes’
The algorithms that the city of Rotterdam uses, for instance to detect benefits fraud, can lead to ‘biased outcomes’. That is the conclusion of the Rekenkamer Rotterdam in a report published on Thursday. Chairman Paul Hofstra explains what went wrong.
By Paul Hofstra and Rik Kuiper for Volkskrant on April 15, 2021
Rotterdam’s use of algorithms can lead to biased outcomes
The city of Rotterdam uses algorithms to support its decision-making. Although there is attention within the municipality for the ethical use of algorithms, awareness of why this is necessary is not yet widespread. This can lead to a lack of transparency around algorithms and to biased outcomes, as with an algorithm aimed at combating benefits fraud. This and more is what the Rekenkamer Rotterdam concludes in its report ‘Gekleurde technologie’.
From Rekenkamer Rotterdam on April 14, 2021
AI system for granting UK visas is biased, rights groups claim
Immigrant rights campaigners bring legal challenge to Home Office on algorithm that streams visa applicants.
By Henry McDonald for The Guardian on October 29, 2019
Online proctoring excludes and discriminates
The use of software to automatically detect cheating on online exams – online proctoring – has been the go-to solution for many schools and universities in response to the COVID-19 pandemic. In this article, Shea Swauger addresses some of the potential discriminatory, privacy and security harms that can impact groups of students across class, gender, race, and disability lines. Swauger provides a critique of how these technologies encode “normal” bodies – cisgender, white, able-bodied, neurotypical, male – as the standard, and how students who do not (or cannot) conform are punished by them.
Continue reading “Online proctoring excludes and discriminates”
Europe’s artificial intelligence blindspot: Race
Upcoming rules on AI might make Europe’s race issues a tech problem too.
By Melissa Heikkilä for POLITICO on March 16, 2021
How a Discriminatory Algorithm Wrongly Accused Thousands of Families of Fraud
Dutch tax authorities used algorithms to automate an austere and punitive war on low-level fraud—the results were catastrophic.
By Gabriel Geiger for VICE on March 1, 2021
Automated racism: How tech can entrench bias
Dutch benefits scandal highlights need for EU scrutiny.
By Nani Jansen Reventlow for POLITICO on March 2, 2021
Can Auditing Eliminate Bias from Algorithms?
A growing industry wants to scrutinize the algorithms that govern our lives—but it needs teeth.
By Alfred Ng for The Markup on February 23, 2021
Official Information About COVID-19 Is Reaching Fewer Black People on Facebook
According to data from The Markup’s Citizen Browser project, there are major disparities in who is shown public health information about the pandemic.
By Corin Faife and Dara Kerr for The Markup on March 4, 2021
Our Bodies Encoded: Algorithmic Test Proctoring in Higher Education
Cheating is not a technological problem, but a social and pedagogical problem. Technology is often blamed for creating the conditions in which cheating proliferates and is then offered as the solution to the problem it created; both claims are false.
By Shea Swauger for Hybrid Pedagogy on April 2, 2020
Black voices bring much needed context to our data-driven society
By Klint Finley for GitHub on February 18, 2021