An investigation by The Markup in March 2021 revealed that some universities in the U.S. are using software and risk algorithms that use a student’s race as one of the factors to predict and evaluate how successful that student may be. Several universities have described race as a “high impact predictor”. The investigation found large disparities in how the software treated students of different races, with Black students deemed to be at four times the risk of their White peers.
Continue reading “Racist Technology in Action: U.S. universities using race in their risk algorithms as a predictor for student success”
Reference man
Meet Reference man: a white man, about 1.75 metres tall and weighing roughly 80 kilos. Our world has been tuned, tested and built around him. Sometimes that is merely clumsy, but at times it is life-threatening. In this four-part series, Sophie Frankenmolen takes the viewer along in her investigation of this bizarre phenomenon.
By Sophie Frankenmolen for NPO Start on January 13, 2022
Technologies of Black Freedoms: Calling On Black Studies Scholars, with SA Smythe
Refusing to see like a state.
By J. Khadijah Abdurahman and SA Smythe for Logic on December 25, 2022
The Humanities Can’t Save Big Tech From Itself
Hiring sociocultural workers to correct bias overlooks the limitations of these underappreciated fields.
By Elena Maris for WIRED on January 12, 2022
Predictive policing reinforces and accelerates racial bias
In a recent investigative piece, The Markup and Gizmodo analysed 5.9 million crime predictions made by PredPol, crime prediction software used by law enforcement agencies in the U.S. The results confirm the racist logic of predictive policing and its impact on individuals and neighbourhoods. Compared to Whiter, middle- and upper-income neighbourhoods, Black, Latino and poor neighbourhoods were relentlessly targeted by the software, which recommended increased police presence. The fewer White residents who lived in an area – and the more Black and Latino residents who lived there – the more likely PredPol was to predict a crime there. In their dataset, some neighbourhoods were the subject of more than 11,000 predictions.
Continue reading “Predictive policing reinforces and accelerates racial bias”
Dutch Data Protection Authority (AP) fines the tax agency for discriminatory data processing
The Dutch Data Protection Authority, the Autoriteit Persoonsgegevens (AP), has fined the Dutch Tax Agency 2.75 million euros for discriminatory data processing as part of the child benefits scandal.
Continue reading “Dutch Data Protection Authority (AP) fines the tax agency for discriminatory data processing”
Crime Prediction Software Promised to Be Free of Biases. New Data Shows It Perpetuates Them
Millions of crime predictions left on an unsecured server show PredPol mostly avoided Whiter neighborhoods, targeted Black and Latino neighborhoods.
By Aaron Sankin, Annie Gilbertson, Dhruv Mehrotra and Surya Mattu for The Markup on December 2, 2021
Shirley Cards
Photographer Ibarionex Perello recalls how school picture day would go back in the 1970s at the Catholic school he attended in South Los Angeles. Kids would file into the school auditorium in matching uniforms, sit on a stool, the photographer would snap a couple of images, and that would be it. But when the pictures came back weeks later, Perello always noticed that the kids with lighter skin tones looked better — or at least more like themselves. Those with darker skin tones looked to be hidden in shadows. That experience stuck with him, but he didn’t realize why this was happening until later in his life.
From 99% Invisible on November 8, 2021
Discriminating Data
How big data and machine learning encode discrimination and create agitated clusters of comforting rage.
By Wendy Hui Kyong Chun for The MIT Press on November 1, 2021
Amnesty’s grim warning against another ‘Toeslagenaffaire’
In its report of 25 October, Amnesty slams the Dutch government’s use of discriminatory algorithms in the child benefits scandal (toeslagenaffaire) and warns that the likelihood of such a scandal occurring again is very high. The report, aptly titled ‘Xenophobic machines – Discrimination through unregulated use of algorithms in the Dutch childcare benefits scandal’, offers a human rights analysis of a specific sub-element of the scandal: the use of algorithms and risk models. It builds on the findings of the Dutch data protection authority and several other government reports.
Continue reading “Amnesty’s grim warning against another ‘Toeslagenaffaire’”
Racist Technology in Action: Facebook labels black men as ‘primates’
In the summer of 2020, amid the reckoning of the Black Lives Matter movement, the Daily Mail, a British tabloid, posted a video featuring black men in altercations with the police and white civilians. In the New York Times, Ryan Mac reports how Facebook users who watched that video saw an automated prompt asking whether they would like to “keep seeing videos about Primates”, even though the video had nothing to do with primates or monkeys.
Continue reading “Racist Technology in Action: Facebook labels black men as ‘primates’”
Xenophobic machines: Discrimination through unregulated use of algorithms in the Dutch childcare benefits scandal
Social security enforcement agencies worldwide are increasingly automating their processes in the hope of detecting fraud. The Netherlands is at the forefront of this development. The Dutch tax authorities adopted an algorithmic decision-making system to create risk profiles of individuals applying for childcare benefits in order to detect inaccurate and potentially fraudulent applications at an early stage. Nationality was one of the risk factors used by the tax authorities to assess the risk of inaccuracy and/or fraud in the applications submitted. This report illustrates how the use of individuals’ nationality resulted in discrimination as well as racial profiling.
From Amnesty International on October 25, 2021
Raziye Buse Çetin: ‘The absence of marginalised people in AI policymaking’
Creating welcoming and safe spaces for racialised people in policymaking is essential for addressing AI harms. Since the beginning of my career as an AI policy researcher, I’ve witnessed many important instances where people of color were almost totally absent from AI policy conversations. I remember very well the discomfort I experienced when I was stopped at the entrance of a launch event for a report on algorithmic bias. The person who was tasked with ushering people into the meeting room was convinced that I was not “in the right place”. Following a completely avoidable policing situation, I was in the room, but the room didn’t seem right to me. Although the topic was algorithmic bias and discrimination, I couldn’t spot one racialised person there — people who are most likely to experience algorithmic harm.
By Raziye Buse Çetin for Who Writes The Rules on March 11, 2019
Dr Nakeema Stefflbauer: ‘#defundbias in online hiring and listen to the people in Europe whom AI algorithms harm’
The first time I applied to work at a European company, my interviewer verbally grilled me about my ethnic origin. “Is your family from Egypt? Morocco? Are you Muslim?” asked a white Belgian man looking for a project manager. He was the CEO. My CV at the time was US-style, without a photograph, but with descriptions of research I had conducted at various Middle East and North African universities. I’d listed my nationality and my BA, MA, and PhD degrees, which confirmed my Ivy League graduate status several times over. “Are either of your parents Middle Eastern?” the CEO persisted.
By Nakeema Stefflbauer for Who Writes The Rules on August 23, 2021
Europe wants to champion human rights. So why doesn’t it police biased AI in recruiting?
European jobseekers are being disadvantaged by AI bias in recruiting. How can a region that wants to champion human rights allow this?
By Nakeema Stefflbauer for Sifted on October 8, 2021
Why ‘debiasing’ will not solve racist AI
Policy makers are starting to understand that many systems running on AI exhibit some form of racial bias. So they are happy when computer scientists tell them that ‘debiasing’ is a solution for these problems: testing the system for racial and other forms of bias, and making adjustments until these no longer show up in the results.
Continue reading “Why ‘debiasing’ will not solve racist AI”
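To make concrete what that kind of bias testing amounts to, here is a minimal, illustrative sketch in Python. It is not taken from any of the systems discussed; the data, group labels and function name are invented for this example. It simply computes per-group approval rates and the gap between them – the sort of disparity metric that ‘debiasing’ tries to drive to zero.

# Illustrative only: a toy demographic-parity check of the kind 'debiasing' relies on.
# The data, group labels and function name are made up for this sketch.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

decisions = [("group_a", True), ("group_a", True), ("group_a", False),
             ("group_b", True), ("group_b", False), ("group_b", False)]
rates = selection_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates, f"demographic-parity gap: {gap:.2f}")
# 'Debiasing' tweaks the model until this gap shrinks; it says nothing about
# the structural causes of the disparity, which is the article's point.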
Racist Technology in Action: White preference in mortgage-approval algorithms
A very clear example of racist technology was exposed by Emmanuel Martinez and Lauren Kirchner in an article for The Markup. Algorithms used by a variety of American banks and lenders to automatically assess or advise on mortgage applications display clear racial disparities. In national data from the United States for 2019, they found that “loan applicants of color were 40%–80% more likely to be denied than their White counterparts. In certain metro areas, the disparity was greater than 250%.”
Continue reading “Racist Technology in Action: White preference in mortgage-approval algorithms”
If AI is the problem, is debiasing the solution?
The development and deployment of artificial intelligence (AI) in all areas of public life have raised many concerns about harmful consequences for society, in particular the impact on marginalised communities. EDRi’s latest report, “Beyond Debiasing: Regulating AI and its Inequalities”, authored by Agathe Balayn and Dr. Seda Gürses, argues that policymakers must tackle the root causes of the power imbalances caused by the pervasive use of AI systems. By promoting technical ‘debiasing’ as the main solution to AI-driven structural inequality, we risk vastly underestimating the scale of the social, economic and political problems AI systems can inflict.
By Agathe Balayn and Seda Gürses for European Digital Rights (EDRi) on September 21, 2021
How Stereotyping and Bias Lingers in Product Design
Brands originally built on racist stereotypes have existed for more than a century. Now racial prejudice is also creeping into the design of tech products and algorithms.
From YouTube on September 15, 2021
We leven helaas nog steeds in een wereld waarin huidskleur een probleem is
‘Daddy, can I have the skin-colour one?’ Surprised, I looked up from the colouring page I was working on to see my daughter pointing at a marker with a peach-like colour. Or maybe it was closer to apricot. Either way, the marker was certainly not her skin colour. My daughter may be two shades lighter than I am, but she is unmistakably brown.
By Ilyaz Nasrulla for Trouw on September 23, 2021
Airbnb pricing algorithm led to increased racial disparities, study finds
Low uptake of ‘Smart Pricing’ feature among black hosts increased earnings gap.
By Dave Lee and Madhumita Murgia for Financial Times on May 13, 2021
Government: Stop using discriminatory algorithms
In her Volkskrant opinion piece, Nani Jansen Reventlow makes a forceful argument for the government to stop using algorithms that lead to discrimination and exclusion. Reventlow, director of the Digital Freedom Fund, uses a myriad of examples to show how disregarding the social nature of technological systems can reproduce existing social injustices such as racism and discrimination. She discusses the automated fraud detection system SyRI, which was ruled to be in violation of fundamental rights (and its dangerous successor, Super SyRI), as well as the racist proctoring software we wrote about earlier.
Continue reading “Government: Stop using discriminatory algorithms”
Proof for Twitter’s bias toward lighter faces
We have written about the racist cropping algorithm that Twitter uses, and have shared how Twitter tried to fix the symptoms. Twitter also instituted an ‘algorithmic bug bounty’, asking researchers to prove bias in their algorithms.
Continue reading “Proof for Twitter’s bias toward lighter faces”
Uber-racist: Racial discrimination in dynamic pricing algorithms
Racial discrimination in dynamic pricing algorithms is neither surprising nor new. VentureBeat writes about another recent study that supports these findings, in the context of the dynamic pricing algorithms used by Uber, Lyft and other ride-hailing apps. Neighbourhoods that were poorer and had larger non-white populations were significantly associated with higher fare prices. A similar issue was discovered in Airbnb’s ‘Smart Pricing’ feature, which aims to help hosts secure more bookings. It turned out to be detrimental to black hosts, leading to greater social inequality (even if unintentionally).
Continue reading “Uber-racist: Racial discrimination in dynamic pricing algorithms”
Racist Technology in Action: Racist search engine ads
Back in 2013, Harvard professor Latanya Sweeney was one of the first people to demonstrate racism (she called it ‘discrimination’) in online algorithms. She did this with her research on the ad delivery practices of Google.
Continue reading “Racist Technology in Action: Racist search engine ads”
The Secret Bias Hidden in Mortgage-Approval Algorithms
Even accounting for factors lenders said would explain disparities, people of color are denied mortgages at significantly higher rates than White people.
By Emmanuel Martinez and Lauren Kirchner for The Markup on August 25, 2021
Twitter’s algorithmic bias bug bounty could be the way forward, if regulators step in
Twitter opened its image cropping algorithm and gave prizes to people who could find biases in it. While interesting in itself, the program mostly reveals the impotence of regulators.
By Nicolas Kayser-Bril for AlgorithmWatch on August 17, 2021
Researchers find racial discrimination in ‘dynamic pricing’ algorithms used by Uber, Lyft, and others
A preprint study shows ride-hailing services like Uber, Lyft, and Via charge higher prices in certain neighborhoods based on racial and other biases.
By Kyle Wiggers for VentureBeat on June 12, 2020
Adam Bomb on Twitter
Just tell me the reason isn’t what I think it is, @Uber
By Adam Bomb for Twitter on August 15, 2021
Student proves Twitter algorithm ‘bias’ toward lighter, slimmer, younger faces
Company pays $3,500 to Bogdan Kulynych who demonstrated flaw in image cropping software.
By Alex Hern for The Guardian on August 10, 2021
Onderzoek door Defcon-bezoekers bevestigt vooroordelen in algoritme van Twitter
There are biases in one of Twitter’s algorithms, researchers discovered during an algorithmic bias bounty competition at Defcon. Among other things, photos of older people and people with disabilities are filtered out by Twitter’s cropping tool.
By Stephan Vegelien for Tweakers on August 10, 2021
Are we automating racism?
Vox host Joss Fong wanted to know… “Why do we think tech is neutral? How do algorithms become biased? And how can we fix these algorithms before they cause harm?”
Continue reading “Are we automating racism?”
Opinie: Stop algoritmen van overheid die tot discriminatie en uitsluiting leiden
Government executive agencies use countless ‘blacklists’ of potential fraudsters. This can lead to (indirect) ethnic profiling and to new dramas after the child benefits scandal (toeslagenaffaire).
By Nani Jansen Reventlow for Volkskrant on July 15, 2021
Why tech needs to focus on the needs of marginalized groups
Marginalized groups are often not represented in technology development. What we need is inclusive participation to centre on the concerns of these groups.
By Nani Jansen Reventlow for The World Economic Forum on July 8, 2021
Are We Automating Racism?
Many of us assume that tech is neutral, and we have turned to tech as a way to root out racism, sexism, or other “isms” plaguing human decision-making. But as data-driven systems become a bigger and bigger part of our lives, we also notice more and more when they fail, and, more importantly, that they don’t fail on everyone equally. Glad You Asked host Joss Fong wants to know: Why do we think tech is neutral? How do algorithms become biased? And how can we fix these algorithms before they cause harm?
From YouTube on March 31, 2021
Human-in-the-loop is not the magic bullet to fix AI harms
In many discussions and policy proposals related to addressing and fixing the harms of AI and algorithmic decision-making, much attention and hope have been placed on human oversight as a solution. This article by Ben Green and Amba Kak urges us to question the limits of human oversight, rather than seeing it as a magic bullet. For example, calling for ‘meaningful’ oversight sounds better in theory than in practice. Humans can be prone to automation bias, struggle to evaluate and make decisions based on an algorithm’s results, or exhibit racial biases in response to algorithms. Consequently, these effects can produce racist outcomes, as has already been shown in areas such as policing and housing.
Continue reading “Human-in-the-loop is not the magic bullet to fix AI harms”
Oproep aan de UvA: stop het gebruik van racistische proctoringsoftware
The UvA can no longer justify using proctoring for its exams, now that it is clear that this surveillance software has a disproportionately negative impact on people of colour.
Continue reading “Oproep aan de UvA: stop het gebruik van racistische proctoringsoftware”
Apple’s emoji keyboard is reinforcing Western stereotypes
The feature associates “Africa” with the hut emoji and “China” with the dog emoji.
By Andrew Deck for Rest of World on June 15, 2021
Long overdue: Google has improved its camera app to work better for Black people
The following short video by Vox shows how white skin has always been the norm in photography. Black people didn’t start to look good on film until the 1970s, when furniture makers complained to Kodak that its film didn’t render the difference between dark- and light-grained wood, and chocolate companies were upset that you couldn’t see the difference between dark and light chocolate.
Continue reading “Long overdue: Google has improved its camera app to work better for Black people”
Demographic skews in training data create algorithmic errors
Women and people of colour are underrepresented and depicted with stereotypes.
From The Economist on June 5, 2021