Even though the Dutch tax office (the Belastingdienst) was advised to immediately stop using three risk profiling algorithms, it decided to continue their use, according to this reporting by Follow the Money.
Belastingdienst keeps breaking the law with potentially discriminatory fraud algorithms
After the childcare benefits scandal, the Belastingdienst was advised to immediately shut down three potentially discriminatory fraud algorithms. Yet the tax authority decided to keep using them: the organisation’s interests weighed more heavily than compliance with the law and the protection of fundamental rights. This is evident from documents that were released to Follow the Money two years after their disclosure was requested. ‘Incomprehensible and astounding.’
By David Davidson and Sebastiaan Brommersma for Follow the Money on December 14, 2023
Not a solution: Meta’s new AI system to contain discriminatory ads
Meta has deployed a new AI system on Facebook and Instagram to fix its algorithmic bias problem for housing ads in the US. But it’s probably more band-aid than AI fairness solution. Gaps in Meta’s compliance report make it difficult to verify if the system is working as intended, which may preview what’s to come from Big Tech compliance reporting in the EU.
By John Albert for AlgorithmWatch on November 17, 2023
AI is far from a miracle cure – certainly not in the hospital
Detecting tumours, developing new medicines – there is no shortage of promises about what artificial intelligence could mean for the medical world. But before you can leave such important work to technology, you have to understand exactly how it works. And we are nowhere near that point.
By Maurits Martijn for De Correspondent on November 6, 2023
Instagram apologises for adding ‘terrorist’ to some Palestinian user profiles
Parent company Meta says a bug caused the ‘inappropriate’ auto-translations and has now been fixed, while an employee says it pushed ‘a lot of people over the edge’.
By Josh Taylor for The Guardian on October 20, 2023
Facebook Report Concludes Company Censorship Violated Palestinian Human Rights
A report commissioned by Meta — Facebook and Instagram’s parent company — found bias against Palestinians during an Israeli assault last May.
By Sam Biddle for The Intercept on September 21, 2022
Equal love: Dating App Breeze seeks to address Algorithmic Discrimination
In a world where swiping left or right is the main route to love, whose profiles dating apps show you can change the course of your life.
Al Jazeera asks: Can AI eliminate human bias or does it perpetuate it?
In its online series of digital dilemmas, Al Jazeera takes a look at AI in relation to social inequities. Loyal readers of this newsletter will recognise many of the examples they touch on, like how Stable Diffusion exacerbates and amplifies racial and gender disparities or the Dutch childcare benefits scandal.
Racist Technology in Action: Flagged as risky simply for requesting social assistance in Veenendaal, The Netherlands
This collaborative investigation by Spotlight Bureau, Lighthouse Reports and Follow the Money dives into the story of a Moroccan-Dutch family in Veenendaal that was singled out for fraud investigation by the Dutch government.
These new tools could make AI vision systems less biased
Two new papers from Sony and Meta describe novel methods to make bias detection fairer.
By Melissa Heikkilä for MIT Technology Review on September 25, 2023
Technology hits some groups of people in our society harder than others (and that should not be the case)
When technology is used, our societal problems are reflected and sometimes made worse. Those societal problems have a long history of unjust power structures, racism, sexism and other forms of discrimination. We see it as our task to recognise those unjust structures and to resist them.
By Evelyn Austin, Ilja Schurink and Nadia Benaissa for Bits of Freedom on September 12, 2023
Police stop using controversial algorithm that ‘predicts’ who will use violence in the future
The police will stop using, ‘effective immediately’, the algorithm with which they predict whether someone will commit violence in the future. Earlier this week, Follow the Money revealed that the so-called Risicotaxatie Instrument Geweld falls short in both ethical and statistical terms.
By David Davidson for Follow the Money on August 25, 2023
Dubious police algorithm ‘predicts’ who will commit violence in the future
Since 2015, the police have been using an algorithm to predict who will commit violence in the future. For Moroccan and Antillean Dutch people, that likelihood was estimated to be higher because of their background. According to the police this no longer happens, but that does not resolve the dangers of the model. ‘This algorithm carries enormous risks.’
By David Davidson and Marc Schuilenburg for Follow the Money on August 23, 2023
Dutch police used algorithm to predict violent behaviour without any safeguards
For many years the Dutch police have used a risk modelling algorithm to predict the chance that an individual suspect will commit a violent crime. Follow the Money exposed the total lack of moral, legal, and statistical justification for its use, and the police have now stopped using the system.
Racist Technology in Action: The World Bank’s Poverty Targeting Algorithms Deprive People of Social Security
A system funded by the World Bank to assess who is most in need of support is reported to be not only faulty but also discriminatory, depriving many of their right to social security. In a recent report titled “Automated Neglect: How The World Bank’s Push to Allocate Cash Assistance Using Algorithms Threatens Rights”, Human Rights Watch outlines why the system used in Jordan, specifically, should be abandoned.
Women of colour are leading the charge against racist AI
In this Dutch-language piece for De Groene Amsterdammer, Marieke Rotman offers an accessible introduction to the main voices, both internationally and in the Netherlands, who are tirelessly fighting racism and discrimination in AI systems. Not coincidentally, most of the people doing this labour are women of colour. The piece guides you through their impressive work and their leading perspectives on the dynamics of racism and technology.
Racist Technology in Action: How Pokémon Go inherited existing racial inequities
When Aura Bogado was playing Pokémon Go in a much Whiter neighbourhood than the one where she lived, she noticed how many more PokéStops were suddenly available. She then crowdsourced the locations of these stops and, together with the Urban Institute think tank, found that there were on average 55 PokéStops in majority White neighbourhoods and 19 in neighbourhoods that were majority Black.
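The comparison behind those figures is, at its core, a group-by-and-average over crowdsourced location data. A minimal sketch in Python; the numbers below are made up for illustration, standing in for the actual crowdsourced PokéStop dataset and census-based neighbourhood classifications:

import statistics

# Hypothetical crowdsourced records:
# (neighbourhood's racial majority, number of PokéStops in that neighbourhood)
records = [
    ("White", 61), ("White", 48), ("White", 56),
    ("Black", 22), ("Black", 14), ("Black", 21),
]

# Average PokéStop count per neighbourhood group
for group in ("White", "Black"):
    counts = [n for g, n in records if g == group]
    print(f"majority-{group}: {statistics.mean(counts):.0f} PokéStops on average")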
Algorithm to help find fraudulent students turns out to be racist
DUO is the Dutch organisation that administers student grants. It uses an algorithm to help decide which students get a home visit to check for fraudulent behaviour. Turns out it basically only checks students of colour, and has no clue why.
World Bank / Jordan: Poverty Targeting Algorithms Harm Rights
An automated cash transfer program in Jordan developed with significant financing from the World Bank is undermined by errors, discriminatory policies, and stereotypes about poverty.
By Amos Toh for Human Rights Watch on June 13, 2023
An algorithm intended to reduce poverty might disqualify people in need
According to a new report by Human Rights Watch, an algorithmic welfare distribution system funded by the World Bank unfairly and inaccurately quantifies poverty.
By Tate Ryan-Mosley for MIT Technology Review on June 13, 2023
Duo’s fraud hunt almost exclusively hits students with a migration background
The hunt for alleged fraudsters by student finance provider Duo almost exclusively hits students with a migration background. Duo is unaware of any wrongdoing and wants to quadruple the number of checks in September.
By Anouk Kootstra, Bas Belleman and Belia Heilbron for De Groene Amsterdammer on June 21, 2023
On Race, AI, and Representation Or, Why Democracy Now Needs To Redo Its June 1 Segment
On June 1, Democracy Now featured a roundtable discussion hosted by Amy Goodman and Nermeen Shaikh, with three experts on Artificial Intelligence (AI), about their views on AI in the world. They included Yoshua Bengio, a computer scientist at the Université de Montréal, long considered a “godfather of AI,” Tawana Petty, an organiser and Director of Policy at the Algorithmic Justice League (AJL), and Max Tegmark, a physicist at the Massachusetts Institute of Technology. Recently, the Future of Life Institute, of which Tegmark is president, issued an open letter calling “on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” Bengio is a signatory on the letter (as is Elon Musk). The AJL has been around since 2016, and has (along with other organisations) been calling for a public interrogation of racialised surveillance technology, the use of police robots, and other ways in which AI can be directly responsible for bodily harm and even death.
By Yasmin Nair for Yasmin Nair on June 3, 2023
GPT detectors are biased against non-native English writers
The rapid adoption of generative language models has brought about substantial advancements in digital communication, while simultaneously raising concerns regarding the potential misuse of AI-generated content. Although numerous detection methods have been proposed to differentiate between AI and human-generated content, the fairness and robustness of these detectors remain underexplored. In this study, we evaluate the performance of several widely-used GPT detectors using writing samples from native and non-native English writers. Our findings reveal that these detectors consistently misclassify non-native English writing samples as AI-generated, whereas native writing samples are accurately identified. Furthermore, we demonstrate that simple prompting strategies can not only mitigate this bias but also effectively bypass GPT detectors, suggesting that GPT detectors may unintentionally penalize writers with constrained linguistic expressions. Our results call for a broader conversation about the ethical implications of deploying ChatGPT content detectors and caution against their use in evaluative or educational settings, particularly when they may inadvertently penalize or exclude non-native English speakers from the global discourse.
By Eric Wu, James Zou, Mert Yuksekgonul, Weixin Liang and Yining Mao for arXiv.org on April 18, 2023
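The study’s fairness check boils down to a per-group comparison: run a detector over human-written samples from each group of writers and measure how often each group is falsely flagged as AI-generated. A minimal sketch of that comparison, with a made-up toy “detector” and invented samples; the actual study evaluated commercial GPT detectors on real essays by native and non-native English writers:

from collections import defaultdict

def flagged_as_ai(text: str) -> bool:
    """Toy stand-in for a GPT detector: flags text with low lexical variety,
    loosely mimicking detectors that penalise constrained wording."""
    words = text.lower().split()
    return len(set(words)) / max(len(words), 1) < 0.6

# Human-written samples, labelled with a (hypothetical) author group.
samples = [
    ("native", "The committee's deliberations meandered, yet ultimately converged on a nuanced compromise."),
    ("native", "Her argument, though unconventional, quietly reframed the entire debate."),
    ("non-native", "The student said the test was hard and the test was long and the test was unfair."),
    ("non-native", "We did the work and we did the work again and the work was still not done."),
]

# False-positive rate per group: the share of human text misclassified as AI.
tallies = defaultdict(lambda: [0, 0])  # group -> [flagged, total]
for group, text in samples:
    tallies[group][0] += flagged_as_ai(text)
    tallies[group][1] += 1

for group, (flagged, total) in tallies.items():
    print(f"{group}: {flagged}/{total} human samples flagged as AI-generated")

Run on these toy samples, the repetitive (more constrained) sentences are flagged while the lexically richer ones pass, which is exactly the disparity the paper reports at scale.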
Opinion: ‘Not only the disappearing acceptgiro, but smart algorithms too widen the digital divide’
Whoever sees technology only as progress forgets a group of Dutch people for whom that does not hold. They are not just the elderly, says Aaron Mirck, but also the victims of the smart algorithms the government uses. Technology is not neutral, he argues.
By Aaron Mirck for Het Parool on May 27, 2023
Mean Images
An artist considers a new form of machinic representation: the statistical rendering of large datasets, indexed to the probable rather than the real of photography; to the uncanny composite rather than the abstraction of the graph.
By Hito Steyerl for New Left Review on April 28, 2023
Data & Society Announces the Launch of its Algorithmic Impact Methods Lab
Lab will advance assessments of AI systems in the public interest.
From Data & Society on May 10, 2023
Ethnic Profiling
Whistleblower reveals Netherlands’ use of secret and potentially illegal algorithm to score visa applicants.
By Ariadne Papagapitos, Carola Houtekamer, Crofton Black, Daniel Howden, Gabriel Geiger, Klaas van Dijken, Merijn Rengers and Nalinee Maleeyakul for Lighthouse Reports on April 24, 2023
The Netherlands is not struggling with digitalisation, but with discrimination
Algorithms: time and again it is marginalised groups who are hit by digitalisation more often than others, write Evelyn Austin and Nadia Benaissa.
By Evelyn Austin and Nadia Benaissa for NRC on May 5, 2023
‘Beware of this visa application’, warns the algorithm that fosters discrimination. The ministry ignores criticism
Visa policy: the Ministry of Foreign Affairs outsources the paperwork around visa applications to foreign companies as much as possible. But the risk of unequal treatment through the profiling of applicants remains. Criticism of this from the internal privacy watchdog was brushed aside by the ministry.
By Carola Houtekamer, Merijn Rengers and Nalinee Maleeyakul for NRC on April 23, 2023
Meta’s clampdown on Palestine speech is far from ‘unintentional’
A report validated Palestinian experiences of social media censorship in May 2021, but missed how those policies are biased by design.
By Marwa Fatafta for +972 Magazine on October 9, 2022
The black box of algorithms: how discrimination is automated
This week Evelyn dived into the (anything but male) world of glitch art, we talk about the algorithm that the Gemeente Rotterdam used for years to predict which welfare recipients might tamper with their benefits, and we call in with podcast luminary Lieven Heeremans.
By Evelyn Austin, Inge Wannet, Joran van Apeldoorn, Lieven Heeremans and Nadia Benaissa for Bits of Freedom on April 15, 2023
Racist Technology in Action: You look similar to someone we didn’t like → Dutch visa denied
Ignoring earlier Dutch failures in automated decision making, and ignoring advice from its own experts, the Dutch Ministry of Foreign Affairs has decided to cut costs and cut corners by implementing a discriminatory profiling system to process visa applications.
How AIs collapse our history and culture into a monolithic perspective
In this piece on Medium, Jenka Gurfinkel writes about a Reddit user who asked Midjourney, a generative AI, to do the following:
Imagine a time traveler journeyed to various times and places throughout human history and showed soldiers and warriors of the periods what a “selfie” is.
More data will not solve bias in algorithmic systems: it’s a systemic issue, not a ‘glitch’
In an interview with Zoë Corbyn in the Guardian, data journalist and associate professor of journalism Meredith Broussard discusses her new book More Than a Glitch: Confronting Race, Gender and Ability Bias in Tech.
Racist Technology in Action: Racial disparities in the scoring system used for housing allocation in L.A.
In another investigation by The Markup, significant racial disparities were found in the assessment system used by the Los Angeles Homeless Services Authority (LAHSA), the body responsible for coordinating homelessness services in Los Angeles. This assessment system relies on a tool called the Vulnerability Index-Service Prioritisation Decision Assistance Tool (VI-SPDAT) to score and assess whether people qualify for subsidised permanent housing.
‘New benefits scandal’ is emerging in how banks treat Muslims
There needs to be an investigation into the discrimination of Muslims by financial institutions, says the National Coordinator against Discrimination and Racism, Rabin Baldewsingh. He warns of a new benefits scandal.
By Rabin Baldewsingh and Somajeh Ghaeminia for Trouw on April 6, 2023
This Student Is Taking On ‘Biased’ Exam Software
Mandatory face-recognition tools have repeatedly failed to identify people with darker skin tones. One Dutch student is fighting to end their use.
By Morgan Meaker and Robin Pocornie for WIRED on April 5, 2023
AI expert Meredith Broussard: ‘Racism, sexism and ableism are systemic problems’
The journalist and academic says the bias encoded in artificial intelligence systems can’t be fixed with better data alone – the change has to be societal.
By Meredith Broussard and Zoë Corbyn for The Guardian on March 26, 2023
Panel discussion about racism in AI: ‘Artificial intelligence holds up a mirror to us’
How do algorithms contribute to racism? And what are the consequences? Those questions were addressed during a panel discussion on Wednesday afternoon at Science Park. ‘We need to create a “safe space” in which companies dare to be transparent without immediately being punished.’
By Sija van den Beukel for Folia on March 16, 2023
You Are Not a Parrot
You are not a parrot. And a chatbot is not a human. And a linguist named Emily M. Bender is very worried what will happen when we forget this.
By Elizabeth Weil and Emily M. Bender for New York Magazine on March 1, 2023