The development and use of AI and machine learning in healthcare are proliferating. A 2020 study showed that the chest X-ray datasets used to train diagnostic models are biased against certain racial, gender and socioeconomic groups.
Continue reading “Racist Technology in Action: Chest X-ray classifiers exhibit racial, gender and socio-economic bias”
The Case of the Creepy Algorithm That ‘Predicted’ Teen Pregnancy
A government leader in Argentina hailed the AI, which was fed invasive data about girls. The feminist pushback could inform the future of health tech.
By Alexa Hagerty, Diego Jemio and Florencia Aranda for WIRED on February 16, 2022
Bits of Freedom speaks to the Dutch Senate on discriminatory algorithms
Through an official parliamentary investigative committee, the Dutch Senate is investigating how new regulation or law-making processes can help combat discrimination in the Netherlands. The committee focuses on four broad domains: the labour market, education, social security and policing. As part of these wide-ranging efforts, the Senate is hearing from a range of experts and civil society organisations. One contribution stands out from the perspective of racist technology: Nadia Benaissa from Bits of Freedom highlighted the dangers of predictive policing and other uses of automated systems in law enforcement.
Continue reading “Bits of Freedom speaks to the Dutch Senate on discriminatory algorithms”
De discriminatie die in data schuilt
The Dutch Senate is examining the effectiveness of legislation against discrimination. Last Friday we were able to tell the members of parliament about discrimination and algorithms. Below is the core of our story.
By Nadia Benaissa for Bits of Freedom on February 8, 2022
Costly birthplace: discriminating insurance practice
Two residents of Rome with exactly the same driving history, car, age, profession, and number of years holding a driving licence may be charged different prices when purchasing car insurance. Why? Because of their place of birth, according to a recent study.
By Francesco Boscarol for AlgorithmWatch on February 4, 2022
Chicago’s “Race-Neutral” Traffic Cameras Ticket Black and Latino Drivers the Most
A ProPublica analysis found that traffic cameras in Chicago disproportionately ticket Black and Latino motorists. But city officials plan to stick with them — and other cities may adopt them too.
By Emily Hopkins and Melissa Sanchez for ProPublica on January 11, 2022
Holding Facebook Accountable for Digital Redlining
Online ad-targeting practices often reflect and replicate existing disparities, effectively locking out marginalized groups from housing, job, and credit opportunities.
By Linda Morris and Olga Akselrod for American Civil Liberties Union (ACLU) on January 27, 2022
Predictive policing constrains our possibilities for better futures
In the context of the use of crime predictive software in policing, Chris Gilliard reiterated in WIRED how data-driven policing systems and programs are fundamentally premised on the assumption that historical data about crimes determines the future.
Continue reading “Predictive policing constrains our possibilities for better futures”
Racist Technology in Action: U.S. universities using race in their risk algorithms as a predictor for student success
A March 2021 investigation by The Markup revealed that some universities in the U.S. are using software with a risk algorithm that includes a student’s race as one of the factors to predict and evaluate how successful that student may be. Several universities have described race as a “high impact predictor”. The investigation found large disparities in how the software treated students of different races, with Black students deemed four times as high a risk as their White peers.
Continue reading “Racist Technology in Action: U.S. universities using race in their risk algorithms as a predictor for student success”
Reference man
Meet Reference Man: a white man, about 1.75m tall and roughly 80 kilos. Our world is calibrated, tested and built around him. Sometimes that is almost comical, but occasionally it is life-threatening. In this four-part series, Sophie Frankenmolen takes the viewer along in her investigation of this bizarre phenomenon.
By Sophie Frankenmolen for NPO Start on January 13, 2022
Technologies of Black Freedoms: Calling On Black Studies Scholars, with SA Smythe
Refusing to see like a state.
By J. Khadijah Abdurahman and SA Smythe for Logic on December 25, 2022
The Humanities Can’t Save Big Tech From Itself
Hiring sociocultural workers to correct bias overlooks the limitations of these underappreciated fields.
By Elena Maris for WIRED on January 12, 2022
Predictive policing reinforces and accelerates racial bias
In a recent investigative piece, The Markup and Gizmodo analysed 5.9 million crime predictions by PredPol, crime prediction software used by law enforcement agencies in the U.S. The results confirm the racist logic of predictive policing and its impact on individuals and neighbourhoods. Compared with Whiter, middle- and upper-income neighbourhoods, Black, Latino and poor neighbourhoods were relentlessly targeted by the software, which recommended increased police presence. The fewer White residents who lived in an area – and the more Black and Latino residents who lived there – the more likely PredPol was to predict a crime there. Some neighbourhoods in the dataset were the subject of more than 11,000 predictions.
Continue reading “Predictive policing reinforces and accelerates racial bias”
Dutch Data Protection Authority (AP) fines the tax agency for discriminatory data processing
The Dutch Data Protection Authority, the Autoriteit Persoonsgegevens (AP), has fined the Dutch Tax Agency 2.75 million euros for discriminatory data processing as part of the child benefits scandal.
Continue reading “Dutch Data Protection Authority (AP) fines the tax agency for discriminatory data processing”
Crime Prediction Software Promised to Be Free of Biases. New Data Shows It Perpetuates Them
Millions of crime predictions left on an unsecured server show PredPol mostly avoided Whiter neighborhoods, targeted Black and Latino neighborhoods.
By Aaron Sankin, Annie Gilbertson, Dhruv Mehrotra and Surya Mattu for The Markup on December 2, 2021
Shirley Cards
Photographer Ibarionex Perello recalls how school picture day would go in the 1970s at the Catholic school he attended in South Los Angeles. Kids would file into the school auditorium in matching uniforms. They’d sit on a stool, the photographer would snap a couple of images, and that would be it. But when the pictures came back weeks later, Perello always noticed that the kids with lighter skin tones looked better — or at least more like themselves. Those with darker skin tones looked to be hidden in shadows. That experience stuck with him, but he didn’t realize why this was happening until later in his life.
From 99% Invisible on November 8, 2021
Discriminating Data
How big data and machine learning encode discrimination and create agitated clusters of comforting rage.
By Wendy Hui Kyong Chun for The MIT Press on November 1, 2021
Amnesty’s grim warning against another ‘Toeslagenaffaire’
In its report of 25 October, Amnesty slams the Dutch government’s use of discriminatory algorithms in the child benefits scandal (toeslagenaffaire) and warns that the likelihood of such a scandal occurring again is very high. The report, aptly titled ‘Xenophobic machines – Discrimination through unregulated use of algorithms in the Dutch childcare benefits scandal’, conducts a human rights analysis of a specific sub-element of the scandal: the use of algorithms and risk models. It draws on the report of the Dutch data protection authority and several other government reports.
Continue reading “Amnesty’s grim warning against another ‘Toeslagenaffaire’”
Racist Technology in Action: Facebook labels black men as ‘primates’
During the reckoning of the Black Lives Matter movement in the summer of 2020, the Daily Mail, a British tabloid, posted a video featuring Black men in altercations with police officers and white civilians. In The New York Times, Ryan Mac reports how Facebook users who watched that video saw an automated prompt asking whether they would like to “keep seeing videos about Primates”, despite the video having no relation to primates or monkeys.
Continue reading “Racist Technology in Action: Facebook labels black men as ‘primates’”
Xenophobic machines: Discrimination through unregulated use of algorithms in the Dutch childcare benefits scandal
Social security enforcement agencies worldwide are increasingly automating their processes in the hope of detecting fraud. The Netherlands is at the forefront of this development. The Dutch tax authorities adopted an algorithmic decision-making system to create risk profiles of individuals applying for childcare benefits in order to detect inaccurate and potentially fraudulent applications at an early stage. Nationality was one of the risk factors used by the tax authorities to assess the risk of inaccuracy and/or fraud in the applications submitted. This report illustrates how the use of individuals’ nationality resulted in discrimination as well as racial profiling.
From Amnesty International on October 25, 2021
Raziye Buse Çetin: ‘The absence of marginalised people in AI policymaking’
Creating welcoming and safe spaces for racialised people in policymaking is essential for addressing AI harms. Since the beginning of my career as an AI policy researcher, I’ve witnessed many important instances where people of color were almost totally absent from AI policy conversations. I remember very well the feeling of discomfort I experienced when I was stopped at the entrance of a launch event for a report on algorithmic bias. The person tasked with ushering people into the meeting room was convinced that I was not “in the right place”. After that completely avoidable policing situation, I was in the room, but the room didn’t seem right to me. Although the topic was algorithmic bias and discrimination, I couldn’t spot one racialised person there — people who are most likely to experience algorithmic harm.
By Raziye Buse Çetin for Who Writes The Rules on March 11, 2019
Dr Nakeema Stefflbauer: ‘#defundbias in online hiring and listen to the people in Europe whom AI algorithms harm’
The first time I applied to work at a European company, my interviewer verbally grilled me about my ethnic origin. “Is your family from Egypt? Morocco? Are you Muslim?” asked a white Belgian man looking for a project manager. He was the CEO. My CV at the time was US-style, without a photograph, but with descriptions of research I had conducted at various Middle East and North African universities. I’d listed my nationality and my BA, MA, and PhD degrees, which confirmed my Ivy League graduate status several times over. “Are either of your parents Middle Eastern?” the CEO persisted.
By Nakeema Stefflbauer for Who Writes The Rules on August 23, 2021
Europe wants to champion human rights. So why doesn’t it police biased AI in recruiting?
European jobseekers are being disadvantaged by AI bias in recruiting. How can a region that wants to champion human rights allow this?
By Nakeema Stefflbauer for Sifted on October 8, 2021
Why ‘debiasing’ will not solve racist AI
Policy makers are starting to understand that many systems running on AI exhibit some form of racial bias. So they are happy when computer scientists tell them that ‘debiasing’ is a solution to these problems: testing the system for racial and other forms of bias, and making adjustments until these no longer show up in the results.
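The audit loop described above can be made concrete with a minimal sketch: measure whether a system’s positive-outcome rate differs across demographic groups (a ‘demographic parity’ check), the kind of test such debiasing exercises typically start from. The data, group labels and threshold below are invented for illustration.

```python
# Minimal sketch of a demographic-parity audit: compare the rate of
# favourable outcomes (e.g. approvals) between groups. All data here
# is made up for illustration.

def approval_rates(outcomes):
    """outcomes: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = {}, {}
    for group, ok in outcomes:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(outcomes):
    """Largest difference in approval rate between any two groups."""
    rates = approval_rates(outcomes).values()
    return max(rates) - min(rates)

decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]
print(approval_rates(decisions))  # {'A': 0.75, 'B': 0.25}
print(parity_gap(decisions))      # 0.5
```

A gap of zero would count as ‘debiased’ by this narrow measure, which is exactly the limitation the piece criticises: the structural causes of the disparity are left untouched.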
Continue reading “Why ‘debiasing’ will not solve racist AI”
Racist Technology in Action: White preference in mortgage-approval algorithms
A very clear example of racist technology was exposed by Emmanuel Martinez and Lauren Kirchner in an article for The Markup. Algorithms used by a variety of American banks and lenders to automatically assess or advise on mortgages display clear racial disparity. In national 2019 data from the United States, they found that “loan applicants of color were 40%–80% more likely to be denied than their White counterparts. In certain metro areas, the disparity was greater than 250%.”
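To make the quoted figures concrete, here is a small illustrative calculation of what “40% more likely to be denied” means as a ratio of denial rates. The applicant counts are hypothetical; only the form of the statistic is taken from the article.

```python
# Illustrative computation of a denial-rate disparity: how much more
# likely group A is to be denied than group B, as a percentage
# (0% means equal denial rates). Counts below are hypothetical.

def denial_disparity(denied_a, total_a, denied_b, total_b):
    rate_a = denied_a / total_a
    rate_b = denied_b / total_b
    return round((rate_a / rate_b - 1) * 100, 1)

# Hypothetical: 140 of 1,000 applicants of color denied versus
# 100 of 1,000 White applicants.
print(denial_disparity(140, 1000, 100, 1000))  # 40.0 -> "40% more likely"
```

A value of 250 on this measure would correspond to the metro areas where The Markup found the disparity was “greater than 250%”.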
Continue reading “Racist Technology in Action: White preference in mortgage-approval algorithms”
If AI is the problem, is debiasing the solution?
The development and deployment of artificial intelligence (AI) in all areas of public life have raised many concerns about the harmful consequences for society, in particular the impact on marginalised communities. EDRi’s latest report “Beyond Debiasing: Regulating AI and its Inequalities”, authored by Agathe Balayn and Dr Seda Gürses, argues that policymakers must tackle the root causes of the power imbalances caused by the pervasive use of AI systems. In promoting technical ‘debiasing’ as the main solution to AI-driven structural inequality, we risk vastly underestimating the scale of the social, economic and political problems AI systems can inflict.
By Agathe Balayn and Seda Gürses for European Digital Rights (EDRi) on September 21, 2021
How Stereotyping and Bias Lingers in Product Design
Brands originally built on racist stereotypes have existed for more than a century. Now racial prejudice is also creeping into the design of tech products and algorithms.
From YouTube on September 15, 2021
We leven helaas nog steeds in een wereld waarin huidskleur een probleem is
‘Daddy, can I have that skin colour?’ Surprised, I looked up from the colouring page I was filling in to see my daughter pointing at a marker with a peach-like colour. Or maybe it was closer to the colour of an apricot. Either way, the marker certainly did not have her skin colour. My daughter may be two shades lighter than I am, but she is unmistakably brown.
By Ilyaz Nasrulla for Trouw on September 23, 2021
Airbnb pricing algorithm led to increased racial disparities, study finds
Low uptake of ‘Smart Pricing’ feature among black hosts increased earnings gap.
By Dave Lee and Madhumita Murgia for Financial Times on May 13, 2021
Government: Stop using discriminatory algorithms
In her Volkskrant opinion piece Nani Jansen Reventlow makes a forceful argument for the government to stop using algorithms that lead to discrimination and exclusion. Reventlow, director of the Digital Freedom Fund, employs a myriad of examples to show how disregarding the social nature of technological systems can lead to reproducing existing social injustices such as racism or discrimination. The automatic fraud detection system SyRI that was ruled in violation of fundamental rights (and its dangerous successor Super SyRI) is discussed, as well as the racist proctoring software we wrote about earlier.
Continue reading “Government: Stop using discriminatory algorithms”
Proof for Twitter’s bias toward lighter faces
We have written about the racist cropping algorithm that Twitter uses, and have shared how Twitter tried to fix the symptoms. Twitter also instituted an ‘algorithmic bug bounty’, asking researchers to prove bias in its algorithms.
Continue reading “Proof for Twitter’s bias toward lighter faces”
Uber-racist: Racial discrimination in dynamic pricing algorithms
Racial discrimination in dynamic pricing algorithms is neither surprising nor new. VentureBeat writes about another recent study that supports these findings, in the context of dynamic pricing algorithms used by ride-hailing companies such as Uber, Lyft and other apps. Poorer neighbourhoods with larger non-white populations were significantly associated with higher fare prices. A similar issue was discovered in Airbnb’s ‘Smart Pricing’ feature, which aims to help hosts secure more bookings. It turned out to be detrimental to Black hosts, leading to greater social inequality (even if unintentionally).
Continue reading “Uber-racist: Racial discrimination in dynamic pricing algorithms”
Racist Technology in Action: Racist search engine ads
Back in 2013, Harvard professor Latanya Sweeney was one of the first people to demonstrate racism (she called it ‘discrimination’) in online algorithms. She did this with her research on the ad delivery practices of Google.
Continue reading “Racist Technology in Action: Racist search engine ads”
The Secret Bias Hidden in Mortgage-Approval Algorithms
Even accounting for factors lenders said would explain disparities, people of color are denied mortgages at significantly higher rates than White people.
By Emmanuel Martinez and Lauren Kirchner for The Markup on August 25, 2021
Twitter’s algorithmic bias bug bounty could be the way forward, if regulators step in
Twitter opened its image cropping algorithm and gave prizes to people who could find biases in it. While interesting in itself, the program mostly reveals the impotence of regulators.
By Nicolas Kayser-Bril for AlgorithmWatch on August 17, 2021
Researchers find racial discrimination in ‘dynamic pricing’ algorithms used by Uber, Lyft, and others
A preprint study shows ride-hailing services like Uber, Lyft, and Via charge higher prices in certain neighborhoods based on racial and other biases.
By Kyle Wiggers for VentureBeat on June 12, 2020
Adam Bomb on Twitter
Just tell me the reason isn’t what I think it is, @Uber
By Adam Bomb for Twitter on August 15, 2021
Student proves Twitter algorithm ‘bias’ toward lighter, slimmer, younger faces
Company pays $3,500 to Bogdan Kulynych who demonstrated flaw in image cropping software.
By Alex Hern for The Guardian on August 10, 2021
Onderzoek door Defcon-bezoekers bevestigt vooroordelen in algoritme van Twitter
There are biases in one of Twitter’s algorithms, researchers discovered during an algorithmic bias bounty competition at Defcon. Among other things, photos of older people and people with disabilities are filtered out by Twitter’s crop tool.
By Stephan Vegelien for Tweakers on August 10, 2021
Are we automating racism?
Vox host Joss Fong wanted to know… “Why do we think tech is neutral? How do algorithms become biased? And how can we fix these algorithms before they cause harm?”
Continue reading “Are we automating racism?”