Recent investigative reporting by Follow the Money has raised significant concerns about the Netherlands’ extensive reliance on the immoral American data analytics firm Palantir Technologies for both police and military operations since 2010.
Continue reading “Dutch police and military secretly work with evil tech giant Palantir”
New EU deportation law must be rejected
In March 2025, the European Commission presented a new proposal for a Return Regulation – more aptly named, the Deportation Regulation – to replace its current Return Directive.
Continue reading “New EU deportation law must be rejected”
Expert rejects Met police claim that study backs bias-free live facial recognition use
Academic says sample size was too small to claim new sensitivity guidelines have removed racial, gender or age bias.
By Vikram Dodd for The Guardian on August 23, 2025
The Twilight Zone
Laila Lalami’s prescient new novel follows a woman imprisoned by the government for her dreams.
By Sue Halpern for The New York Review of Books on July 31, 2025
Playbook: A Pragmatic Guide To Confronting Police Technologies
This Playbook brings together facilitation tools we have developed and research we have done in cities across Europe.
From Justice, Equity and Technology Project on July 31, 2025
Navigating no man’s land
The Wetenschappelijke Adviesraad Politie (WARP, the police’s scientific advisory council) advises the chief of the Dutch national police on seven urgent challenges around digitalisation and AI in police work.
From Wetenschappelijke Adviesraad Politie on June 5, 2025
Amnesty report (yet again) exposes racist AI in UK police forces
Amnesty International UK’s report Automated Racism (from last February, PDF) reveals that almost three-quarters of UK police forces use discriminatory predictive policing systems that entrench racial profiling. At least 33 forces deploy AI tools that predict crime locations and profile individuals as future criminals based on biased historical data, perpetuating racism and inequality.
Continue reading “Amnesty report (yet again) exposes racist AI in UK police forces”
White Collar Crime Risk Zones
A machine learning system that predicts where white collar crimes will occur throughout the US.
From White Collar Crime Risk Zones on March 1, 2017
US to revoke student visas over ‘pro-Hamas’ social media posts flagged by AI – report
State department launches AI-assisted reviews of accounts to look for what it perceives as Hamas supporters.
From The Guardian on March 7, 2025
In Spain, an algorithm used by police to ‘combat’ gender violence determines whether women live or die
Lobna Hemid. Stefany González Escarraman. Eva Jaular (and her 11-month-old baby). These three women and an infant, among many others, were killed in gender-related violence in Spain. As this article reports, all of them had been classified as “low” or “negligible” risk by VioGén, despite reporting abuse to the police. Lobna Hemid, after reporting her husband’s abuse and being assessed as “low risk” by VioGén, received only minimal protection from the police; weeks later, her husband stabbed her to death.
Continue reading “In Spain, an algorithm used by police to ‘combat’ gender violence determines whether women live or die”
Opinion: ‘Minister Van Weel’s plan for online surveillance will hit Muslims disproportionately hard’
The Dutch police and security services have a tradition of discriminating against Muslims, writes Evelyn Austin, director of Bits of Freedom. She fears that Muslims will once again bear the brunt if the police are given more powers to surveil people online.
By Evelyn Austin for Het Parool on February 8, 2025
Why Stopping Algorithmic Inequality Requires Taking Race Into Account
Let us explain. With cats
By Aaron Sankin and Natasha Uzcátegui-Liggett for The Markup on July 18, 2024
How governments are using facial recognition to crack down on protesters
Mass protests used to offer a degree of safety in numbers. Facial recognition technology changes the equation.
By Darren Loucaides for Rest of World on March 27, 2024
Facial Recognition Led to Wrongful Arrests. So Detroit Is Making Changes.
The Detroit Police Department arrested three people after bad facial recognition matches, a national record. But it’s adopting new policies that even the A.C.L.U. endorses.
By Kashmir Hill for The New York Times on June 29, 2024
How and why algorithms discriminate
Automated decision-making systems contain hidden discriminatory prejudices. We’ll explain the causes, possible consequences, and the reasons why existing laws do not provide sufficient protection against algorithmic discrimination.
By Pie Sombetzki for AlgorithmWatch on June 26, 2024
These Wrongly Arrested Black Men Say a California Bill Would Let Police Misuse Face Recognition
Three men falsely arrested based on face recognition technology have joined the fight against a California bill that aims to place guardrails around police use of the technology. They say it will still allow abuses and misguided arrests.
By Khari Johnson for The Markup on June 12, 2024
Dutch Ministry of Foreign Affairs dislikes the conclusions of a solid report that marks their visa process as discriminatory so buys a shoddy report saying the opposite
For more than a year now, the Dutch Ministry of Foreign Affairs has ignored advice from its experts and continued its use of discriminatory risk profiling of visa applicants.
Continue reading “Dutch Ministry of Foreign Affairs dislikes the conclusions of a solid report that marks their visa process as discriminatory so buys a shoddy report saying the opposite”
AI detection has no place in education
The ubiquitous availability of AI has made plagiarism detection software utterly useless, argues our Hans de Zwart in the Volkskrant.
Continue reading “AI detection has no place in education”
Tech workers demand Google and Amazon to stop their complicity in Israel’s genocide against the Palestinian people
Since 2021, thousands of Amazon and Google tech workers have been organising against Project Nimbus, Google and Amazon’s shared USD$1.2 billion contract with the Israeli government and military. In all that time, there has been no response from management or executives. Their organising efforts have accelerated since 7 October 2023, with the Israeli state’s ongoing genocide in Gaza and the occupied Palestinian territories.
Continue reading “Tech workers demand Google and Amazon to stop their complicity in Israel’s genocide against the Palestinian people”
Racist Technology in Action: The UK Home Office’s Sorting Algorithm and the Racist Violence of Borders
In 2020, two NGOs finally forced the UK Home Office’s hand, compelling it to abandon its secretive and racist algorithm for sorting visitor visa applications. Foxglove and The Joint Council for the Welfare of Immigrants (JCWI) had been battling the algorithm for years, arguing that it is a form of institutionalized racism and calling it “speedy boarding for white people.”
Continue reading “Racist Technology in Action: The UK Home Office’s Sorting Algorithm and the Racist Violence of Borders”
Building Blocks of a Digital Caste Panopticon: Everyday Brahminical Policing in India
Tracing the history of Telangana’s police state and its Brahminical investments.
By Aditya Rawat, Mrinalini R, Nikita Sonavane, Ramani Mohanakrishnan, and Vikas Yadav for Logic on December 13, 2023
‘Forget the surveillance state, we live in a surveillance society’
According to Marc Schuilenburg, professor of digital surveillance, we no longer have any secrets. In everything we do, something or someone is watching and recording our movements. We know it, yet we simply go along with it. That is how deeply digital surveillance has penetrated the capillaries of our society: ‘We often don’t even recognise it anymore.’
By Marc Schuilenburg and Sebastiaan Brommersma for Follow the Money on February 4, 2024
On “The Palestine Laboratory”
A large part of Israel’s economy and global influence depends on its military-technology complex, which not only fuels the ongoing genocide in Gaza but is also exported to facilitate oppression around the world. In this thorough 2023 book, journalist Anthony Loewenstein makes explicit how Israel’s military-industrial complex profits exorbitantly from exporting technologies “battle-tested” in occupied Gaza and the West Bank.
Continue reading “On “The Palestine Laboratory””
Predictive Policing Software Terrible at Predicting Crimes
A software company sold a New Jersey police department an algorithm that was right less than 1 percent of the time.
By Aaron Sankin and Surya Mattu for WIRED on October 2, 2023
Standing in solidarity with the Palestinian people
We at the Racism and Technology Center stand in solidarity with the Palestinian people. We condemn the violence enacted against the innocent people in Palestine and Israel, and mourn alongside all who are dead, injured and still missing. Palestinian communities are being subjected to unlawful collective punishment in Gaza and the West Bank, including the ongoing bombings and the blockade of water, food and energy. We call for an end to the blockade and an immediate ceasefire.
Continue reading “Standing in solidarity with the Palestinian people”
Use of machine translation tools exposes already vulnerable asylum seekers to even more risks
The use of and reliance on machine translation tools in asylum procedures has become increasingly common amongst government contractors and organisations working with refugees and migrants. This Guardian article highlights many of the issues documented by Respond Crisis Translation, a network of people who provide urgent interpretation services for migrants and refugees. The problems with machine translation tools occur throughout the asylum process, from border stations to detention centers to immigration courts.
Continue reading “Use of machine translation tools exposes already vulnerable asylum seekers to even more risks”
Racist Technology in Action: Flagged as risky simply for requesting social assistance in Veenendaal, The Netherlands
This collaborative investigative effort by Spotlight Bureau, Lighthouse Reports and Follow the Money, dives into the story of a Moroccan-Dutch family in Veenendaal which was targeted for fraud by the Dutch government.
Continue reading “Racist Technology in Action: Flagged as risky simply for requesting social assistance in Veenendaal, The Netherlands”
Suspected because you live at a ‘verwonderadres’: ‘They kept insisting that I open the door’
Government fraud hunters working together under the banner of the Landelijke Stuurgroep Interventieteams (national steering group of intervention teams) select ‘verwonderadressen’ across the country: addresses where residents might be committing fraud. A reconstruction shows how a family in Veenendaal came into their sights and got inspectors at the door at three addresses. ‘We heard from the neighbours that they were watching our house from the bushes.’
By David Davidson for Follow the Money on September 6, 2023
Police stop using controversial algorithm that ‘predicts’ who will use violence in the future
The police are stopping ‘immediately’ with the algorithm they use to predict whether someone will use violence in the future. Earlier this week, Follow the Money revealed that the so-called Risicotaxatie Instrument Geweld (violence risk assessment instrument) is substandard both ethically and statistically.
By David Davidson for Follow the Money on August 25, 2023
Dubious police algorithm ‘predicts’ who will commit violence in the future
Since 2015, the police have been using an algorithm to predict who will commit violence in the future. Dutch people of Moroccan and Antillean descent were assigned a higher risk because of their background. According to the police, this no longer happens, but that does not resolve the dangers of the model. ‘This algorithm carries enormous risks.’
By David Davidson and Marc Schuilenburg for Follow the Money on August 23, 2023
Another false facial recognition match: pregnant woman wrongfully arrested
Police in America are using facial recognition software to match security footage of crimes to people. For the New York Times, Kashmir Hill describes another example of a wrong match leading to a wrongful arrest.
Continue reading “Another false facial recognition match: pregnant woman wrongfully arrested”
Dutch police used algorithm to predict violent behaviour without any safeguards
For many years, the Dutch police have used a risk modelling algorithm to predict the chance that an individual suspect will commit a violent crime. Follow the Money exposed the total lack of moral, legal, and statistical justification for its use, and the police have now stopped using the system.
Continue reading “Dutch police used algorithm to predict violent behaviour without any safeguards”
The Best Algorithms Still Struggle to Recognize Black Faces
US government tests find even top-performing facial recognition systems misidentify Black people at rates 5 to 10 times higher than they do white people.
By Tom Simonite for WIRED on July 22, 2019
Eight Months Pregnant and Arrested After False Facial Recognition Match
Porcha Woodruff thought the police who showed up at her door to arrest her for carjacking were joking. She is the first woman known to be wrongfully accused as a result of facial recognition technology.
By Kashmir Hill for The New York Times on August 6, 2023
France wants to legalise mass surveillance for the Paris Olympics 2024: “Safety” and “security”, for whom?
Many governments are using mass surveillance to support law enforcement in the name of safety and security. In France, the French Parliament (and before it, the French Senate) has approved the use of automated behavioural video surveillance at the 2024 Paris Olympics. Simply put, France wants to legalise mass surveillance at the national level, which can violate many rights, such as the freedoms of assembly and association, privacy, and non-discrimination.
Continue reading “France wants to legalise mass surveillance for the Paris Olympics 2024: “Safety” and “security”, for whom?”
On Race, AI, and Representation Or, Why Democracy Now Needs To Redo Its June 1 Segment
On June 1, Democracy Now featured a roundtable discussion hosted by Amy Goodman and Nermeen Shaikh, with three experts on Artificial Intelligence (AI), about their views on AI in the world. They included Yoshua Bengio, a computer scientist at the Université de Montréal, long considered a “godfather of AI,” Tawana Petty, an organiser and Director of Policy at the Algorithmic Justice League (AJL), and Max Tegmark, a physicist at the Massachusetts Institute of Technology. Recently, the Future of Life Institute, of which Tegmark is president, issued an open letter calling “on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” Bengio is a signatory on the letter (as is Elon Musk). The AJL has been around since 2016, and has (along with other organisations) been calling for a public interrogation of racialised surveillance technology, the use of police robots, and other ways in which AI can be directly responsible for bodily harm and even death.
By Yasmin Nair for Yasmin Nair on June 3, 2023
Chinese internet trolls are adopting American racism to taunt Black users
Chinese social media platforms, like Xiaohongshu, Kuaishou, and Douyin, host hundreds of accounts using American cop profile photos with the aim of taunting Black users.
By Viola Zhou for Rest of World on May 23, 2023
‘Thousands of Dollars for Something I Didn’t Do’
Because of a bad facial recognition match and other hidden technology, Randal Reid spent nearly a week in jail, falsely accused of stealing purses in a state he said he had never even visited.
By Kashmir Hill and Ryan Mac for The New York Times on March 31, 2023
Racial Discrimination in Face Recognition Technology
The application of face recognition technology in the criminal justice system threatens to perpetuate racial inequality.
By Alex Najibi for Science in the News on October 24, 2020
Enough is Enough. Tell Congress to Ban Federal Use of Face Recognition
Cities and counties across the country have banned government use of face surveillance technology, and many more are weighing proposals to do so. From Boston to San Francisco, Jackson, Mississippi to Minneapolis, elected officials and activists know that face surveillance gives police the power to track us wherever we go. It also disproportionately impacts people of color, turns us all into perpetual suspects, increases the likelihood of being falsely arrested, and chills people’s willingness to participate in First Amendment-protected activities. Even Amazon, known for operating one of the largest video surveillance networks in the history of the world, extended its moratorium on selling face recognition to police.
By Matthew Guariglia for Electronic Frontier Foundation (EFF) on April 4, 2023