Starting November 27th, 2023.
Judgement of the Dutch Institute for Human Rights shows how difficult it is to legally prove algorithmic discrimination
On October 17th, the Netherlands Institute for Human Rights ruled that the VU did not discriminate against bioinformatics student Robin Pocornie on the basis of race by using anti-cheating software. However, according to the institute, the VU did discriminate on the grounds of race in how it handled her complaint.
Events, exhibits and other things to do
Starting October 28th, 2023.
Proctoring software uses fudge factor for dark-skinned students to adjust their suspicion score
Respondus, a vendor of online proctoring software, has been granted a patent for their “systems and methods for assessing data collected by automated proctoring.” The patent shows that their example method for calculating a risk score is adjusted on the basis of people’s skin colour.
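To make concrete what such an adjustment looks like, here is a minimal sketch in Python. The function, the weights, and the 0.75 discount are all invented for illustration; they are not taken from Respondus' patent.

```python
# Hypothetical sketch of a proctoring risk score with a skin-tone
# adjustment term. All names and weights are invented; this is not
# Respondus' actual method.
def suspicion_score(face_missing_seconds: float,
                    gaze_away_seconds: float,
                    dark_skin_tone: bool) -> float:
    score = 2.0 * face_missing_seconds + 1.0 * gaze_away_seconds
    # The 'fudge factor': discount flagged events for darker-skinned
    # students because the face detector fails on them more often,
    # an implicit admission that the underlying detection is biased.
    if dark_skin_tone:
        score *= 0.75  # arbitrary illustrative discount
    return score

print(suspicion_score(30, 10, dark_skin_tone=False))  # 70.0
print(suspicion_score(30, 10, dark_skin_tone=True))   # 52.5
```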
Al Jazeera asks: Can AI eliminate human bias or does it perpetuate it?
In its online series on digital dilemmas, Al Jazeera takes a look at AI in relation to social inequities. Loyal readers of this newsletter will recognise many of the examples it touches on, like how Stable Diffusion exacerbates and amplifies racial and gender disparities, or the Dutch childcare benefits scandal.
Events, exhibits and other things to do
Starting September 30th, 2023.
Another false facial recognition match: pregnant woman wrongfully arrested
Police in America are using facial recognition software to match security footage of crimes to people. Kashmir Hill describes for the New York Times another example of a false match leading to a wrongful arrest.
Dutch police used algorithm to predict violent behaviour without any safeguards
For many years, the Dutch police used a risk-modelling algorithm to predict the chance that an individual suspect would commit a violent crime. Follow the Money exposed the total lack of a moral, legal, and statistical justification for its use, and the police have now stopped using the system.
Events, exhibits and other things to do
Starting September 2nd, 2023.
Current state of research: Face detection still has problems with darker faces
Scientific research on the quality of face detection systems keeps finding the same result: no matter how, when, or with which system the testing is done, faces of people with a darker skin tone are detected less reliably than faces of people with a lighter skin tone.
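The core measurement in these studies is simple: run the detector over labelled photos and compare detection rates per skin-tone group. A minimal sketch, with invented sample data:

```python
# Detection rate per skin-tone group; the sample data is invented.
from collections import defaultdict

results = [  # (skin_tone_group, was_a_face_detected)
    ("lighter", True), ("lighter", True), ("lighter", True), ("lighter", False),
    ("darker", True), ("darker", False), ("darker", False), ("darker", True),
]

counts = defaultdict(lambda: [0, 0])  # group -> [detected, total]
for group, detected in results:
    counts[group][0] += int(detected)
    counts[group][1] += 1

for group, (detected, total) in counts.items():
    print(f"{group}: {detected / total:.0%} detected")
# The recurring finding is a persistent gap between the two groups.
```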
Racist Technology in Action: How Pokémon Go inherited existing racial inequities
When Aura Bogado was playing Pokémon Go in a much Whiter neighbourhood than the one where she lived, she noticed how many more PokéStops were suddenly available. She then crowdsourced the locations of these stops and found, together with the Urban Institute think tank, that majority-White neighbourhoods had on average 55 PokéStops, while majority-Black neighbourhoods had 19.
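The aggregation behind that finding amounts to grouping stop counts by each neighbourhood's majority demographic and averaging, something like this sketch (the data frame is invented; only the averages match the reported figures):

```python
import pandas as pd

# Invented example data; only the method (group by neighbourhood
# majority, take the mean) reflects the described analysis.
df = pd.DataFrame({
    "neighbourhood": ["A", "B", "C", "D"],
    "majority": ["White", "White", "Black", "Black"],
    "pokestops": [60, 50, 21, 17],
})

print(df.groupby("majority")["pokestops"].mean())
# majority
# Black    19.0
# White    55.0
```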
Events, exhibits and other things to do
Starting August 5th, 2023.
Algorithm to help find fraudulent students turns out to be racist
DUO is the Dutch organisation for administering student grants. It uses an algorithm to help decide which students get a home visit to check for fraudulent behaviour. It turns out that DUO basically only checks students of colour, and has no clue why.
Events, exhibits and other things to do
Starting July 8th, 2023.
Representing skin tone, or Google’s hubris versus the simplicity of Crayola
Google wants to “help computers ‘see’ our world”, and one of its ways of battling the biases that current AI and machine learning systems perpetuate is to introduce a more inclusive scale of skin tones: the ‘Monk Skin Tone Scale’.
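In practice, using such a scale means mapping a sampled skin pixel to the nearest of its ten tones. A sketch of that mapping; the hex values below are illustrative placeholders, not necessarily the official Monk Skin Tone Scale values:

```python
# Map a sampled skin pixel to the nearest tone on a 10-step scale.
# The hex values are illustrative placeholders.
TONES = ["#f6ede4", "#f3e7db", "#f7ead0", "#eadaba", "#d7bd96",
         "#a07e56", "#825c43", "#604134", "#3a312a", "#292420"]

def hex_to_rgb(h: str) -> tuple:
    h = h.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def nearest_tone(pixel: tuple) -> int:
    # 1-based index of the closest tone, by squared RGB distance
    # (a real system would use a perceptual colour space instead).
    dists = [sum((a - b) ** 2 for a, b in zip(pixel, hex_to_rgb(t)))
             for t in TONES]
    return dists.index(min(dists)) + 1

print(nearest_tone((130, 92, 67)))  # -> 7
```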
Events, exhibits and other things to do
Starting June 10th, 2023.
Doing an exam as if “driving at night with a car approaching from the other direction with its headlights on full-beam”
Robin Pocornie’s complaint against the VU for its use of Proctorio, which had trouble detecting her face as a person of colour, is part of a larger, international story, as an article in Wired shows.
Racist Technology in Action: You look similar to someone we didn’t like → Dutch visa denied
Ignoring earlier Dutch failures in automated decision-making, and ignoring advice from its own experts, the Dutch Ministry of Foreign Affairs has decided to cut costs and cut corners by implementing a discriminatory profiling system to process visa applications.
Events, exhibits and other things to do
Starting May 13th, 2023.
How AIs collapse our history and culture into a monolithic perspective
In this piece on Medium, Jenka Gurfinkel writes about a Reddit user who asked Midjourney, a generative AI, to do the following:
Imagine a time traveler journeyed to various times and places throughout human history and showed soldiers and warriors of the periods what a “selfie” is.
Events, exhibits and other things to do
Starting April 15th, 2023.
Work related to the Racism and Technology Center is getting media attention
The current wave of reporting on the AI bubble has one advantage: it also creates a bit of space in the media to write about how AI reflects the existing inequities in our society.
Racist Technology in Action: Rotterdam’s welfare fraud prediction algorithm was biased
The algorithm that the city of Rotterdam used to predict the risk of welfare fraud fell into the hands of journalists. It turns out that the system was biased against marginalised groups such as young mothers and people who don’t have Dutch as their first language.
First Dutch citizen proves that an algorithm discriminated against her on the basis of her skin colour
Robin Pocornie was featured in the Dutch current affairs programme EenVandaag. Professor Sennay Ghebreab and former Member of Parliament Kees Verhoeven provided expertise and commentary.
Quantifying bias in society with ChatGPT-like tools
ChatGPT is an implementation of a so-called ‘large language model’. These models are trained on text from the internet at large, which means that they inherit the bias that exists in our language and in our society. This has an interesting consequence: it suddenly becomes possible to see, in a quantitative and undeniable way, how bias changes over time.
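A minimal sketch of such a measurement, using the Hugging Face transformers library (assumed to be installed) and its fill-mask pipeline: compare how strongly a model associates ‘he’ and ‘she’ with an occupation.

```python
from transformers import pipeline

# Assumes the `transformers` library and the bert-base-uncased model.
unmasker = pipeline("fill-mask", model="bert-base-uncased")

for sentence in ["The doctor said [MASK] would be late.",
                 "The nurse said [MASK] would be late."]:
    predictions = unmasker(sentence, targets=["he", "she"])
    probs = {p["token_str"]: round(p["score"], 3) for p in predictions}
    print(sentence, probs)

# The he/she probability gap quantifies the gender association the
# model inherited from its training text; probing models trained on
# corpora from different eras shows how that bias shifts over time.
```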
Dutch Institute for Human Rights speaks about Proctorio at Dutch Parliament
In a roundtable on artificial intelligence in the Dutch Parliament, Quirine Eijkman spoke on behalf of the Netherlands Institute for Human Rights about Robin Pocornie’s case against the discriminatory use of Proctorio at the VU.
Dutch Institute for Human Rights: Use of anti-cheating software can be algorithmic discrimination (i.e. racist)
Dutch student Robin Pocornie filed a complaint with the Netherlands Institute for Human Rights. The surveillance software that her university used had trouble recognising her as a human being because of her skin colour. After a hearing, the Institute has now ruled that Robin has presented enough evidence to presume that she was indeed discriminated against. The ball is now in the court of the VU (her university) to prove that the software treated everybody the same.
Auto-detecting racist language in housing documents
DoNotPay is a ‘robot lawyer’ service that allows its customers (regular citizens) to automatically do things like fight parking tickets, get refunds on flight tickets, or auto-cancel their free trials. Earlier this year, it expanded its service to include finding and helping remove racist language in housing documents.
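DoNotPay hasn’t published its implementation, but the general approach can be as simple as pattern-matching known racially restrictive covenant phrasing, along these invented lines:

```python
import re

# Invented sketch: flag known racially restrictive covenant phrasing.
PATTERNS = [
    r"shall not be (sold|leased|rented) to .{0,40}(negro|colored|caucasian)",
    r"persons? of the (caucasian|white) race only",
]

def flag_racist_clauses(document: str) -> list:
    hits = []
    for pattern in PATTERNS:
        hits += [m.group(0) for m in re.finditer(pattern, document, re.IGNORECASE)]
    return hits

deed = "The premises shall not be sold to any person of the Negro race."
print(flag_racist_clauses(deed))
# ['shall not be sold to any person of the Negro']
```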
Racist Technology in Action: AI-generated image tools amplify harmful stereotypes
Deep learning models that allow you to make images from simple textual ‘prompts’ have recently become available for the general public. Having been trained on a world full of visual representations of social stereotypes, it comes as no surprise that these tools perpetuate a lot of biased and harmful imagery.
The devastating consequences of risk-based profiling by the Dutch police
Diana Sardjoe writes for Fair Trials about how her sons were profiled by the Amsterdam police on the basis of risk models (a form of predictive policing) called the ‘Top600’ (for adults) and the ‘Top400’ (for people aged 12 to 23). Because of this profiling, her sons were “continually monitored and harassed by police.”
Beware of ‘Effective Altruism’ and ‘Longtermism’
‘Effective Altruism’ is in vogue, but deeply problematic.
Listen to Sennay Ghebreab for clarity about what AI should and shouldn’t do
Sennay Ghebreab, head of the Civic AI Lab which aims to develop AI in a socially inclusive manner, was interviewed by Kustaw Bessems for the Volkskrant podcast Stuurloos (in Dutch).
Dutch student files complaint with the Netherlands Institute for Human Rights about the use of racist software by her university
During the pandemic, Dutch student Robin Pocornie had to do her exams with a light pointing straight at her face. Her White fellow students didn’t have to do that. Her university’s surveillance software discriminated against her, and that is why she has filed a complaint (read the full complaint in Dutch) with the Netherlands Institute for Human Rights.
A guidebook on how to combat algorithmic discrimination
What is algorithmic discrimination, how is it caused and what can be done about it? These are the questions that are addressed in AlgorithmWatch’s newly published report Automated Decision-Making Systems and Discrimination.
Shocking report by the Algemene Rekenkamer: state algorithms are a shitshow
The Algemene Rekenkamer (Netherlands Court of Audit) looked into nine different algorithms used by the Dutch state. It found that only three of them fulfilled the most basic of requirements.
Don’t miss this 4-part journalism series on ‘AI Colonialism’
The MIT Technology Review has written a four-part series on how the impact of AI is “repeating the patterns of colonial history.” The Review is careful not to directly compare the current situation with the colonialist capturing of land, extraction of resources, and exploitation of people. Yet, they clearly show that AI does further enrich the wealthy at the tremendous expense of the poor.
Inventing language to avoid algorithmic censorship
Platforms like TikTok, Twitch and Instagram use algorithmic filters to automatically block certain posts on the basis of the language they use. The Washington Post shows how this has created ‘algospeak’, a whole new vocabulary: instead of ‘dead’, users write ‘unalive’; they use ‘SA’ instead of ‘sexual assault’; and they write ‘spicy eggplant’ rather than ‘vibrator’.
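The reason algospeak works is that, at their core, these filters match terms literally; a substituted word simply isn’t on the blocklist. A toy illustration:

```python
# Toy keyword filter: exact blocked terms only, so substitutions pass.
BLOCKED_TERMS = {"dead", "sexual assault"}

def is_blocked(post: str) -> bool:
    text = post.lower()
    return any(term in text for term in BLOCKED_TERMS)

print(is_blocked("my favourite character is dead"))     # True  (removed)
print(is_blocked("my favourite character is unalive"))  # False (passes)
```

Real moderation systems are more sophisticated than this, but the evasion dynamic is the same: the filter keys on surface forms of language, so users route around it by changing the surface form.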
72 civil society organisations to the EU: “Abolish tracking-based online advertising”
The Racism and Technology Center co-signed an open letter asking the EU member states to make sure that the upcoming Digital Services Act will abolish so-called ‘dark patterns’ and advertising that is based on tracking and harvesting personal data.
Facebook has finally stopped enabling racial profiling for targeted advertising
Around 2016, Facebook was still proud of its ability to target “Black affinity” and “White affinity” audiences for its customers’ ads. I then wrote an op-ed decrying this form of racial profiling enabled by Facebook’s data lust.
Racist Technology in Action: “Race-neutral” traffic cameras have a racially disparate impact
Traffic cameras that are used to automatically hand out speeding tickets don’t look at the colour of the person driving the speeding car. Yet, ProPublica has convincingly shown how cameras that don’t have a racial bias can still have a disparate racial impact.
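ProPublica’s method, roughly: count tickets per household in majority-Black versus majority-White areas. Nothing about the camera itself needs to be biased for the rates to diverge; camera placement and road design do the work. A sketch with invented numbers:

```python
# Invented numbers; only the normalisation method reflects the analysis.
areas = [
    {"majority": "Black", "tickets": 1200, "households": 10_000},
    {"majority": "White", "tickets": 450,  "households": 10_000},
]

for area in areas:
    rate = area["tickets"] / area["households"] * 1000
    print(f"majority-{area['majority']}: {rate:.0f} tickets per 1,000 households")
```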