Respondus, a vendor of online proctoring software, has been granted a patent for their “systems and methods for assessing data collected by automated proctoring.” The patent shows that their example method for calculating a risk score is adjusted on the basis of people’s skin colour.
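As a purely illustrative sketch (not the formula from the Respondus patent), the problem with any skin-tone-dependent adjustment is easy to see in code: two students with identical behaviour receive different suspicion scores as soon as a per-skin-tone factor enters the calculation. Every name and number below is made up.

```python
# Purely illustrative; NOT the formula from the Respondus patent.
def suspicion_score(flagged_events: int, exam_minutes: float,
                    skin_tone_factor: float = 1.0) -> float:
    """Toy risk score: flagged events per hour, scaled by an adjustment factor."""
    events_per_hour = flagged_events / (exam_minutes / 60)
    return events_per_hour * skin_tone_factor

# Identical behaviour, different scores, purely because of the adjustment factor
# (the factor values here are invented for illustration):
print(suspicion_score(flagged_events=4, exam_minutes=90))                         # ~2.7
print(suspicion_score(flagged_events=4, exam_minutes=90, skin_tone_factor=1.25))  # ~3.3
```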
Intuit: “Our fraud fights racism”
Today’s key concept is “predatory inclusion”: “a process wherein lenders and financial actors offer needed services to Black households but on exploitative terms that limit or eliminate their long-term benefits”.
By Cory Doctorow for Pluralistic on September 27, 2023
Equal love: Dating App Breeze seeks to address Algorithmic Discrimination
In a world where swiping left or right is the main route to love, whose profiles dating apps show you can change the course of your life.
Use of machine translation tools exposes already vulnerable asylum seekers to even more risks
The use of and reliance on machine translation tools in asylum seeking procedures has become increasingly common amongst government contractors and organisations working with refugees and migrants. This Guardian article highlights many of the issues documented by Respond Crisis Translation, a network of people who provide urgent interpretation services for migrants and refugees. The problems with machine translation tools occur throughout the asylum process, from border stations to detention centers to immigration courts.
Al Jazeera asks: Can AI eliminate human bias or does it perpetuate it?
In its online series of digital dilemmas, Al Jazeera takes a look at AI in relation to social inequities. Loyal readers of this newsletter will recognise many of the examples they touch on, like how Stable Diffusion exacerbates and amplifies racial and gender disparities or the Dutch childcare benefits scandal.
Racist Technology in Action: Flagged as risky simply for requesting social assistance in Veenendaal, The Netherlands
This collaborative investigative effort by Spotlight Bureau, Lighthouse Reports and Follow the Money dives into the story of a Moroccan-Dutch family in Veenendaal which was flagged for fraud investigation by the Dutch government.
Events, exhibits and other things to do
Starting September 30th, 2023.
These new tools could make AI vision systems less biased
Two new papers from Sony and Meta describe novel methods to make bias detection fairer.
By Melissa Heikkilä for MIT Technology Review on September 25, 2023
Does AI perpetuate human bias?
AI bias is not new. Rather, it is a problem that is escalating as newer AI technologies are deployed in more and more parts of our lives. Who does AI discriminate against, and why? Dutch student Robin Pocornie tells us why she submitted a complaint against her university over its use of an AI exam supervision system. New York-based data reporter Lam Thuy Vo points to the insufficient and inadequate datasets that AI is trained on, while Berlin-based tech expert Nakeema Stefflbauer talks about systemic biases entrenched in AI design. When talking about AI chatbots like ChatGPT, Vanderbilt University’s Jules White argues that bias is often brought out by the users themselves. And with Naomi Appelman, co-founder of the Racism and Technology Center in Amsterdam, we discuss the idea of technological objectivity that persists in our society.
By Jules White, Lam Thuy Vo, Nakeema Stefflbauer, Naomi Appelman, Robin Pocornie and Samantha Johnson for YouTube on September 26, 2023
Data Work and its Layers of (In)visibility
No technology has seemingly steam-rolled through every industry and over every community the way artificial intelligence (AI) has in the past decade. Many speak of the inevitable crisis that AI will bring. Others sing its praises as a new Messiah that will save us from the ills of society. What the public and mainstream media hardly ever discuss is that AI is a technology that takes its cues from humans. Any present or future harms caused by AI are a direct result of deliberate human decisions, made by companies prioritizing record profits and attempting to concentrate power by convincing the world that technology is the only solution to societal problems.
By Adrienne Williams and Milagros Miceli for Just Tech on September 6, 2023
Technology hits some groups of people in our society harder than others (and that should not be the case)
When technology is used, our societal problems are reflected and sometimes made worse. Those societal problems have a long history of unfair power structures, racism, sexism and other forms of discrimination. We see it as our task to recognise those unfair structures and to resist them.
By Evely Austin, Ilja Schurink and Nadia Benaissa for Bits of Freedom on September 12, 2023
Suspected because you live at a ‘verwonderadres’: ‘They kept insisting that I had to open the door’
Government fraud hunters working together under the banner of the Landelijke Stuurgroep Interventieteams select ‘verwonderadressen’ (addresses flagged for scrutiny) all over the country where residents might be committing fraud. A reconstruction shows how a family in Veenendaal came into the picture and got inspectors at the door at three addresses. ‘We heard from the neighbours that they were watching our house from the bushes.’
By David Davidson for Follow the Money on September 6, 2023
Police stop using controversial algorithm that ‘predicts’ who will use violence in the future
The police are stopping ‘immediately’ with the algorithm they use to predict whether someone will use violence in the future. Earlier this week, Follow the Money revealed that the so-called Risicotaxatie Instrument Geweld falls short both ethically and statistically.
By David Davidson for Follow the Money on August 25, 2023
Dubious police algorithm ‘predicts’ who will commit violence in the future
Since 2015, the police have been using an algorithm to predict who will commit violence in the future. For Dutch people of Moroccan and Antillean descent, that likelihood was estimated to be higher because of their background. According to the police this no longer happens, but that does not eliminate the dangers of the model. ‘There are enormous risks attached to this algorithm.’
By David Davidson and Marc Schuilenburg for Follow the Money on August 23, 2023
Another false facial recognition match: pregnant woman wrongfully arrested
Police in America are using facial recognition software to match security footage of crimes to people. Kashmir Hill describes for the New York Times another example of a wrong match leading to a wrongful arrest.
Dutch police used algorithm to predict violent behaviour without any safeguards
For many years the Dutch police have used a risk modelling algorithm to predict the chance that an individual suspect will commit a violent crime. Follow the Money exposed the total lack of a moral, legal, and statistical justification for its use, and the police have now stopped using the system.
Filipino workers in “digital sweatshops” train AI models for the West
According to informal government estimates, the Philippines is one of the countries where more than two million people perform crowdwork, such as data annotation.
Racist Technology in Action: The World Bank’s Poverty Targeting Algorithms Deprive People of Social Security
A system funded by the World Bank to assess who is most in need of support is reported to be not only faulty, but also discriminatory and to deprive many of their right to social security. In a recent report titled “Automated Neglect: How The World Bank’s Push to Allocate Cash Assistance Using Algorithms Threatens Rights”, Human Rights Watch outlines why the system used in Jordan, specifically, should be abandoned.
Events, exhibits and other things to do
Starting September 2nd, 2023.
Met het Oog op Morgen: Facial recognition fails to recognise Black woman
A remarkable case in the United States: a woman is arrested for robbery and car theft. But the woman is heavily pregnant and did not commit the crime at all. She came into the picture because a computer picked her out through facial recognition. She was handcuffed in front of her children. Later it turned out: it was not her.
By Naomi Appelman and Rob Trip for NPO Radio 1 on August 9, 2023
The Best Algorithms Still Struggle to Recognize Black Faces
US government tests find that even top-performing facial recognition systems misidentify Black people at rates 5 to 10 times higher than they do white people.
By Tom Simonite for WIRED on July 22, 2019
Eight Months Pregnant and Arrested After False Facial Recognition Match
Porcha Woodruff thought the police who showed up at her door to arrest her for carjacking were joking. She is the first woman known to be wrongfully accused as a result of facial recognition technology.
By Kashmir Hill for The New York Times on August 6, 2023
Women of colour are leading the charge against racist AI
In this Dutch-language piece for De Groene Amsterdammer, Marieke Rotman offers an accessible introduction to the main voices, both internationally and in the Netherlands, tirelessly fighting against racism and discrimination in AI systems. Not coincidentally, most of the people doing this labour are women of colour. The piece guides you through their impressive work and their leading perspectives on the dynamics of racism and technology.
Pokémon GO is changing how cities use public space, but could it be more inclusive?
Beyond the moral arguments for inclusion and equity, placemaking can help strengthen local economies, reduce crime, and drive civic engagement.
By Shiva Kooragayala and Tanaya Srini for Urban Institute on August 2, 2016
Is Pokémon Go racist? How the app may be redlining communities of color
PokéStops in communities of color suggest unconscious digital redlining within the game.
By Allana Akhtar for USA Today (EU) on August 9, 2016
I tried the AI linkedin/curriculum picture generator and this was the result.
The pictures I gave for reference are simple selfies of my face ONLY. But still, the AI oversexualized me due to my features that have been fetishized for centuries. AI is biased for POC. I’m horrified.
By Lana Denina for Twitter on July 15, 2023
An MIT student asked AI to make her headshot more ‘professional.’ It gave her lighter skin and blue eyes.
Rona Wang, who is Asian American, said the AI gave her “features that made me look Caucasian.”
By Rona Wang and Spencer Buell for The Boston Globe on July 19, 2023
Why Black Twitter will never die
Black Twitter is vital as a space for Black folk to create, maintain, and discuss the Black everyday in a way that reaffirms connection, and often joy.
From MSNBC News on July 17, 2023
Black artists show how generative AI ignores, distorts, erases and censors their histories and cultures
Black artists have been tinkering with machine learning algorithms in their artistic projects, surfacing many questions about the troubling relationship between AI and race, as reported in the New York Times.
Current state of research: Face detection still has problems with darker faces
Scientific research on the quality of face detection systems keeps finding the same result: no matter how, when, or with which system the testing is done, faces of people with a darker skin tone are detected less reliably than faces of people with a lighter skin tone.
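The comparison these studies make boils down to a per-group detection rate. Here is a minimal sketch of that calculation, using made-up data rather than results from any specific study:

```python
from collections import defaultdict

# (skin-tone group, face detected?) pairs from running a detector on a labelled
# test set; all values below are invented for illustration.
results = [
    ("lighter", True), ("lighter", True), ("lighter", True), ("lighter", False),
    ("darker", True), ("darker", False), ("darker", False), ("darker", True),
]

counts = defaultdict(lambda: [0, 0])  # group -> [detected, total]
for group, detected in results:
    counts[group][0] += int(detected)
    counts[group][1] += 1

for group, (detected, total) in counts.items():
    print(f"{group}: detection rate {detected / total:.0%}")
```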
Racist Technology in Action: How Pokémon Go inherited existing racial inequities
When Aura Bogado was playing Pokémon Go in a much Whiter neighbourhood than the one where she lived, she noticed how many more PokéStops were suddenly available. She then crowdsourced the locations of these stops and, together with the Urban Institute think tank, found that there were on average 55 PokéStops in majority White neighbourhoods and 19 in majority Black neighbourhoods.
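A minimal sketch, with made-up numbers, of the kind of crowdsourced comparison described above: group the PokéStop counts by a neighbourhood's majority population and average them (the reported averages were 55 versus 19):

```python
from statistics import mean

# Invented example data; the reported real-world averages were 55 vs 19.
neighbourhoods = [
    {"majority": "White", "pokestops": 61},
    {"majority": "White", "pokestops": 49},
    {"majority": "Black", "pokestops": 22},
    {"majority": "Black", "pokestops": 16},
]

for group in ("White", "Black"):
    stops = [n["pokestops"] for n in neighbourhoods if n["majority"] == group]
    print(f"majority-{group} neighbourhoods: {mean(stops):.0f} PokéStops on average")
```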
Events, exhibits and other things to do
Starting August 5th, 2023.
Women of colour in particular are calling out the biases of AI
What you put into self-learning AI systems is what you get back. Technology, mostly developed by white men, thereby amplifies and hides those biases. Women (of colour) in particular are sounding the alarm.
By Marieke Rotman, Nani Jansen Reventlow, Oumaima Hajri and Tanya O’Carroll for De Groene Amsterdammer on July 12, 2023
Civil society calls on EU to protect people’s rights in the AI Act ‘trilogue’ negotiations
As EU institutions start decisive meetings on the Artificial Intelligence (AI) Act, a broad civil society coalition is urging them to prioritise people and fundamental rights.
From European Digital Rights (EDRi) on July 12, 2023
Algorithm to help find fraudulent students turns out to be racist
DUO is the Dutch organisation for administering student grants. It uses an algorithm to help decide which students get a home visit to check for fraudulent behaviour. It turns out that they basically only check students of colour, and they have no clue why.
France wants to legalise mass surveillance for the Paris Olympics 2024: “Safety” and “security”, for whom?
Many governments are using mass surveillance to support law enforcement for the purposes of safety and security. In France, the French Parliament (and before it, the French Senate) has approved the use of automated behavioural video surveillance at the 2024 Paris Olympics. Simply put, France wants to legalise mass surveillance at the national level, which can violate many rights, such as the freedom of assembly and association, privacy, and non-discrimination.
Connecting the dots between early computing, labour history, and plantations
In this accessible longread, Meredith Whittaker takes us through complex and contested 19th century histories to connect the birth of modern computing to plantation technologies and industrial labour control.
Racist Technology in Action: Stable Diffusion exacerbates and amplifies racial and gender disparities
Bloomberg’s researchers used Stable Diffusion to gauge the magnitude of biases in generative AI. Through an analysis of more than 5,000 images created by Stable Diffusion, they found that it takes racial and gender disparities to extremes, with results worse than those found in the real world.
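A minimal sketch of the kind of measurement behind such findings: compare the share of generated images given a particular perceived-demographic label against a real-world baseline for the same occupation. The labels and numbers below are invented for illustration; they are not Bloomberg's data.

```python
from collections import Counter

# Perceived-demographic labels assigned (by annotators or a classifier) to images
# generated from a single occupation prompt, e.g. "a portrait of a judge".
# All labels and baseline figures are made up.
generated_labels = ["man"] * 82 + ["woman"] * 18
real_world_share_women = 0.34  # hypothetical baseline share for the occupation

counts = Counter(generated_labels)
generated_share_women = counts["woman"] / sum(counts.values())
print(f"generated: {generated_share_women:.0%} women vs. baseline {real_world_share_women:.0%}")
```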
Events, exhibits and other things to do
Starting July 8th, 2023.
Black Artists Say A.I. Shows Bias, With Algorithms Erasing Their History
Tech companies acknowledge machine-learning algorithms can perpetuate discrimination and need improvement.
By Zachary Small for The New York Times on July 4, 2023
