During the pandemic, Dutch student Robin Pocornie had to take her exams with a light pointing straight at her face. Her White fellow students didn’t have to. Her university’s surveillance software discriminated against her, which is why she has filed a complaint (read the full complaint in Dutch) with the Netherlands Institute for Human Rights.
Continue reading “Dutch student files complaint with the Netherlands Institute for Human Rights about the use of racist software by her university”
Student reports discrimination by anti-cheating software to the Netherlands Institute for Human Rights
A student at the Vrije Universiteit Amsterdam (VU) is filing a complaint with the Netherlands Institute for Human Rights (pdf). The anti-cheating software used for exams only recognised her if she shone a lamp at her face. According to her, the VU should have checked in advance whether students with Black skin would be recognised just as well as white students.
From NU.nl on July 15, 2022
Student goes to the Netherlands Institute for Human Rights over the VU’s use of racist software
During the coronavirus pandemic, student Robin Pocornie had to take her exams with a lamp shining directly at her face. Her white fellow students did not have to. The VU’s surveillance software discriminated against her, which is why she is filing a complaint today with the Netherlands Institute for Human Rights.
Continue reading “Student goes to the Netherlands Institute for Human Rights over the VU’s use of racist software”
Meta forced to change its advertisement algorithm to address algorithmic discrimination
In his New York Times article, Mike Isaac describes how Meta is implementing a new system to automatically check whether the housing, employment and credit ads it hosts are shown to people equally. The move follows a 115,054 US dollar fine issued to Meta by the US Justice Department, because its ad systems had been shown to discriminate against users by, among other things, excluding Black people from seeing certain housing ads in predominantly white neighbourhoods. This is the outcome of a long process, which we have written about previously.
Continue reading “Meta forced to change its advertisement algorithm to address algorithmic discrimination”
A guidebook on how to combat algorithmic discrimination
What is algorithmic discrimination, how is it caused and what can be done about it? These are the questions that are addressed in AlgorithmWatch’s newly published report Automated Decision-Making Systems and Discrimination.
Continue reading “A guidebook on how to combat algorithmic discrimination”
Racist Technology in Action: Turning a Black person, White
An example of racial bias in machine learning strikes again, this time in a program called PULSE, as reported by The Verge. Input a low-resolution image of Barack Obama – or another person of colour, such as Alexandria Ocasio-Cortez or Lucy Liu – and the resulting AI-generated high-resolution image is distinctly of a white person.
Continue reading “Racist Technology in Action: Turning a Black person, White”
Meta Agrees to Alter Ad Technology in Settlement With U.S.
The Justice Department had accused Meta’s housing advertising system of discriminating against Facebook users based on their race, gender, religion and other characteristics.
By Mike Isaac for The New York Times on June 21, 2022
DALL·E mini has a mysterious obsession with women in saris
The images represent a glitch in the system that even its creator can’t explain.
By Nilesh Christopher for Rest of World on June 22, 2022
How to combat algorithmic discrimination? A guidebook by AutoCheck
We are faced with automated decision-making systems almost every day, and they might be discriminating against us without our even knowing it. A new guidebook helps to better recognize such cases and support those affected.
From AlgorithmWatch on June 21, 2022
Moslima
In the two-part podcast ‘Moslima’, Cigdem Yuksel and Maartje Duin go in search of the origins of the stock image of ‘the Muslim woman’.
By Cigdem Yuksel and Maartje Duin for VPRO on May 15, 2022
Forget sentience… the worry is that AI copies human bias
The fuss about a bot’s ‘consciousness’ obscures far more troubling concerns.
By Kenan Malik for The Guardian on June 19, 2022
Shocking report by the Algemene Rekenkamer: state algorithms are a shitshow
The Algemene Rekenkamer (Netherlands Court of Audit) looked into nine different algorithms used by the Dutch state. It found that only three of them fulfilled the most basic of requirements.
Continue reading “Shocking report by the Algemene Rekenkamer: state algorithms are a shitshow”
Racist Technology in Action: Beauty is in the eye of the AI
While people’s notions of beauty are often steeped in cultural preferences or plain prejudice, the objectivity of an AI system would surely allow it to access a more universal conception of beauty – or so thought the developers of Beauty.AI. Alex Zhavoronkov, who consulted on the development of the Beauty.AI system, described the dystopian motivation behind it clearly: “Humans are generally biased and there needs to be a robot to provide an impartial opinion. Beauty.AI is the first step in a much larger story, in which a mobile app trained to evaluate perception of human appearance will evolve into a caring personal assistant to help users look their best and retain their youthful looks.”
Continue reading “Racist Technology in Action: Beauty is in the eye of the AI”
Various central government algorithms fail to meet basic requirements
Responsible use of algorithms by executive agencies of the Dutch central government is possible, but in practice this is not always the case. The Algemene Rekenkamer established that 3 algorithms meet all basic requirements. For 6 others, a variety of risks exist: inadequate monitoring of performance or effects, bias, data leaks, or unauthorised access.
From Algemene Rekenkamer on May 18, 2022
How AI reinforces racism in Brazil
Author Tarcízio Silva on how algorithmic racism exposes the myth of “racial democracy.”
By Alex González Ormerod and Tarcízio Silva for Rest of World on April 22, 2022
Racist Technology in Action: Chest X-ray classifiers exhibit racial, gender and socio-economic bias
The development and use of AI and machine learning in healthcare is proliferating. A 2020 study has shown that chest X-ray datasets that are used to train diagnostic models are biased against certain racial, gender and socioeconomic groups.
Continue reading “Racist Technology in Action: Chest X-ray classifiers exhibit racial, gender and socio-economic bias”
The Case of the Creepy Algorithm That ‘Predicted’ Teen Pregnancy
A government leader in Argentina hailed the AI, which was fed invasive data about girls. The feminist pushback could inform the future of health tech.
By Alexa Hagerty, Diego Jemio and Florencia Aranda for WIRED on February 16, 2022
Bits of Freedom speaks to the Dutch Senate on discriminatory algorithms
In an official parliamentary investigative committee, the Dutch Senate is investigating how new regulation or law-making processes can help combat discrimination in the Netherlands. The committee focuses on four broad domains: the labour market, education, social security and policing. As part of these wide-ranging investigative efforts, the Senate is hearing from a range of experts and civil society organisations. One contribution stands out from the perspective of racist technology: Nadia Benaissa from Bits of Freedom highlighted the dangers of predictive policing and other uses of automated systems in law enforcement.
Continue reading “Bits of Freedom speaks to the Dutch Senate on discriminatory algorithms”
The discrimination hidden in data
The Dutch Senate is investigating the effectiveness of legislation against discrimination. Last Friday, we were invited to tell the members of parliament about discrimination and algorithms. Below is the gist of our story.
By Nadia Benaissa for Bits of Freedom on February 8, 2022
Costly birthplace: discriminating insurance practice
Two residents of Rome with exactly the same driving history, car, age, profession, and number of years holding a driving license may be charged different prices when purchasing car insurance. Why? Because of their place of birth, according to a recent study.
By Francesco Boscarol for AlgorithmWatch on February 4, 2022
Chicago’s “Race-Neutral” Traffic Cameras Ticket Black and Latino Drivers the Most
A ProPublica analysis found that traffic cameras in Chicago disproportionately ticket Black and Latino motorists. But city officials plan to stick with them — and other cities may adopt them too.
By Emily Hopkins and Melissa Sanchez for ProPublica on January 11, 2022
Holding Facebook Accountable for Digital Redlining
Online ad-targeting practices often reflect and replicate existing disparities, effectively locking out marginalized groups from housing, job, and credit opportunities.
By Linda Morris and Olga Akselrod for American Civil Liberties Union (ACLU) on January 27, 2022
Predictive policing constrains our possibilities for better futures
In the context of the use of crime predictive software in policing, Chris Gilliard reiterated in WIRED how data-driven policing systems and programs are fundamentally premised on the assumption that historical data about crimes determines the future.
Continue reading “Predictive policing constrains our possibilities for better futures”
Racist Technology in Action: U.S. universities using race in their risk algorithms as a predictor for student success
An investigation by The Markup in March 2021 revealed that some universities in the U.S. are using software with a risk algorithm that takes a student’s race as one of the factors to predict and evaluate how successful that student may be. Several universities have described race as a “high impact predictor”. The investigation found large disparities in how the software treated students of different races, with Black students deemed to be at four times higher risk than their White peers.
Continue reading “Racist Technology in Action: U.S. universities using race in their risk algorithms as a predictor for student success”
Reference man
Meet Reference Man: a white man, about 1.75 m tall and weighing roughly 80 kilos. Our world has been tuned, tested and built around him. Sometimes that is almost comical, but occasionally it is life-threatening. In this four-part series, Sophie Frankenmolen takes the viewer along on her investigation into this bizarre phenomenon.
By Sophie Frankenmolen for NPO Start on January 13, 2022
Technologies of Black Freedoms: Calling On Black Studies Scholars, with SA Smythe
Refusing to see like a state.
By J. Khadijah Abdurahman and SA Smythe for Logic on December 25, 2021
The Humanities Can’t Save Big Tech From Itself
Hiring sociocultural workers to correct bias overlooks the limitations of these underappreciated fields.
By Elena Maris for WIRED on January 12, 2022
Predictive policing reinforces and accelerates racial bias
The Markup and Gizmodo, in a recent investigative piece, analysed 5.9 million crime predictions by PredPol, crime prediction software used by law enforcement agencies in the U.S. The results confirm the racist logic driving predictive policing and its impact on individuals and neighbourhoods. Compared with Whiter, middle- and upper-income neighbourhoods, Black, Latino and poor neighbourhoods were relentlessly targeted by the software, which recommended increased police presence. The fewer White residents who lived in an area – and the more Black and Latino residents who lived there – the more likely PredPol was to predict a crime there. Some neighbourhoods in the dataset were the subject of more than 11,000 predictions.
Continue reading “Predictive policing reinforces and accelerates racial bias”
Dutch Data Protection Authority (AP) fines the tax agency for discriminatory data processing
The Dutch Data Protection Authority, the Autoriteit Persoonsgegevens (AP), has fined the Dutch Tax Agency 2.75 million euros for discriminatory data processing as part of the child benefits scandal.
Continue reading “Dutch Data Protection Authority (AP) fines the tax agency for discriminatory data processing”
Crime Prediction Software Promised to Be Free of Biases. New Data Shows It Perpetuates Them
Millions of crime predictions left on an unsecured server show PredPol mostly avoided Whiter neighborhoods, targeted Black and Latino neighborhoods.
By Aaron Sankin, Annie Gilbertson, Dhruv Mehrotra and Surya Mattu for The Markup on December 2, 2021
Shirley Cards
Photographer Ibarionex Perello recalls how school picture day would go back in the 1970s at the Catholic school he attended in South Los Angeles. Kids would file into the school auditorium in matching uniforms, sit on a stool, the photographer would snap a couple of images, and that would be it. But when the pictures came back weeks later, Perello always noticed that the kids with lighter skin tones looked better – or at least more like themselves. Those with darker skin tones seemed hidden in shadows. That experience stuck with him, but he didn’t understand why it happened until later in his life.
From 99% Invisible on November 8, 2021
Discriminating Data
How big data and machine learning encode discrimination and create agitated clusters of comforting rage.
By Wendy Hui Kyong Chun for The MIT Press on November 1, 2021
Amnesty’s grim warning against another ‘Toeslagenaffaire’
In its report of 25 October, Amnesty slams the Dutch government’s use of discriminatory algorithms in the child benefits scandal (toeslagenaffaire) and warns that the likelihood of such a scandal occurring again is very high. The report, aptly titled ‘Xenophobic machines – Discrimination through unregulated use of algorithms in the Dutch childcare benefits scandal’, offers a human rights analysis of one specific sub-element of the scandal: the use of algorithms and risk models. It draws on the report of the Dutch data protection authority and several other government reports.
Continue reading “Amnesty’s grim warning against another ‘Toeslagenaffaire’”
Racist Technology in Action: Facebook labels black men as ‘primates’
Amid the reckoning of the Black Lives Matter movement in the summer of 2020, the Daily Mail, a British tabloid, posted a video featuring Black men in altercations with police officers and white civilians. In the New York Times, Ryan Mac reports how Facebook users who watched that video saw an automated prompt asking whether they would like to “keep seeing videos about Primates”, even though the video had nothing to do with primates or monkeys.
Continue reading “Racist Technology in Action: Facebook labels black men as ‘primates’”
Xenophobic machines: Discrimination through unregulated use of algorithms in the Dutch childcare benefits scandal
Social security enforcement agencies worldwide are increasingly automating their processes in the hope of detecting fraud. The Netherlands is at the forefront of this development. The Dutch tax authorities adopted an algorithmic decision-making system to create risk profiles of individuals applying for childcare benefits in order to detect inaccurate and potentially fraudulent applications at an early stage. Nationality was one of the risk factors used by the tax authorities to assess the risk of inaccuracy and/or fraud in the applications submitted. This report illustrates how the use of individuals’ nationality resulted in discrimination as well as racial profiling.
From Amnesty International on October 25, 2021
Raziye Buse Çetin: ‘The absence of marginalised people in AI policymaking’
Creating welcoming and safe spaces for racialised people in policymaking is essential for addressing AI harms. Since the beginning of my career as an AI policy researcher, I’ve witnessed many important instances where people of color were almost totally absent from AI policy conversations. I remember very well the feeling of discomfort I had experienced when I was stopped at the entrance of a launch event for a report on algorithmic bias. The person who was tasked with ushering people into the meeting room was convinced that I was not “in the right place”. Following a completely avoidable policing situation, I was in the room, but the room didn’t seem right to me. Although the topic was algorithmic bias and discrimination, I couldn’t spot one racialised person there — people who are most likely to experience algorithmic harm.
By Raziye Buse Çetin for Who Writes The Rules on March 11, 2019
Dr Nakeema Stefflbauer: ‘#defundbias in online hiring and listen to the people in Europe whom AI algorithms harm’
The first time I applied to work at a European company, my interviewer verbally grilled me about my ethnic origin. “Is your family from Egypt? Morocco? Are you Muslim?” asked a white Belgian man looking for a project manager. He was the CEO. My CV at the time was US-style, without a photograph, but with descriptions of research I had conducted at various Middle East and North African universities. I’d listed my nationality and my BA, MA, and PhD degrees, which confirmed my Ivy League graduate status several times over. “Are either of your parents Middle Eastern?” the CEO persisted.
By Nakeema Stefflbauer for Who Writes The Rules on August 23, 2021
Europe wants to champion human rights. So why doesn’t it police biased AI in recruiting?
European jobseekers are being disadvantaged by AI bias in recruiting. How can a region that wants to champion human rights allow this?
By Nakeema Stefflbauer for Sifted on October 8, 2021
Why ‘debiasing’ will not solve racist AI
Policy makers are starting to understand that many systems running on AI exhibit some form of racial bias. So they are happy when computer scientists tell them that ‘debiasing’ is a solution for these problems: testing the system for racial and other forms of bias, and making adjustments until these no longer show up in the results.
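To make concrete what such a ‘debiasing’ test typically amounts to, here is a minimal sketch of a demographic parity check in Python. This is our illustration, not anything from a real system: the predictions, the group labels and the notion of a “positive outcome” are all hypothetical.

```python
# A minimal, hypothetical sketch of the kind of "bias test" described
# above: compare a model's positive-outcome rates across groups
# (demographic parity). None of this data comes from a real system.

from collections import defaultdict

def selection_rates(predictions, groups):
    """Return the share of positive predictions per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical model outputs (1 = favourable decision) and group labels.
predictions = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = selection_rates(predictions, groups)
disparity = min(rates.values()) / max(rates.values())

print(rates)      # {'A': 0.75, 'B': 0.25}
print(disparity)  # 0.33... - far below 1.0, so group B is selected far less often
```

‘Debiasing’ in this narrow sense means adjusting the model or its outputs until a ratio like this approaches 1.0 – which, as the post argues, does not address the deeper problem.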
Continue reading “Why ‘debiasing’ will not solve racist AI”
Racist Technology in Action: White preference in mortgage-approval algorithms
A very clear example of racist technology was exposed by Emmanuel Martinez and Lauren Kirchner in an article for The Markup. Algorithms used by a variety of American banks and lenders to automatically assess or advise on mortgages display clear racial disparities. In national data from the United States in 2019, they found that “loan applicants of color were 40%–80% more likely to be denied than their White counterparts. In certain metro areas, the disparity was greater than 250%.”
Continue reading “Racist Technology in Action: White preference in mortgage-approval algorithms”
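As an aside, a figure like the “40%–80% more likely to be denied” quoted above is a simple relative comparison of denial rates. A small sketch, using invented counts rather than The Markup’s actual data:

```python
# Hypothetical illustration (not The Markup's actual data) of how a
# "more likely to be denied" disparity figure is computed from raw counts.

def denial_rate(denied, total):
    return denied / total

white_rate = denial_rate(denied=8_000, total=100_000)  # 8.0% denied
poc_rate = denial_rate(denied=13_600, total=100_000)   # 13.6% denied

# Relative disparity: how much more likely applicants of colour are
# to be denied than White applicants.
disparity = poc_rate / white_rate - 1
print(f"{disparity:.0%} more likely to be denied")     # -> 70% more likely
```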