Through an official parliamentary investigative committee, the Dutch Senate is examining how new regulation or law-making processes can help combat discrimination in the Netherlands. The committee focuses on four broad domains: the labour market, education, social security and policing. As part of this wide-ranging inquiry, the Senate is hearing from a range of experts and civil society organisations. One contribution stands out from the perspective of racist technology: Nadia Benaissa of Bits of Freedom highlighted the dangers of predictive policing and other uses of automated systems in law enforcement.
Continue reading “Bits of Freedom speaks to the Dutch Senate on discriminatory algorithms”

Racist Technology in Action: “Race-neutral” traffic cameras have a racially disparate impact
Traffic cameras that are used to automatically hand out speeding tickets don’t look at the colour of the person driving the speeding car. Yet, ProPublica has convincingly shown how cameras that don’t have a racial bias can still have a disparate racial impact.
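ProPublica's point can be sketched with a toy model. The numbers and the `tickets_per_1000` function below are entirely hypothetical (not ProPublica's data or method): the ticketing rule never sees the driver, yet camera placement alone produces a disparate per-capita outcome.

```python
# Toy model of a "race-neutral" rule with disparate impact (synthetic numbers).
# The rule ignores who is driving; only camera density differs by area.

def tickets_per_1000(residents, cameras, speeding_rate=0.05, catch_per_camera=0.08):
    """Expected tickets per 1,000 residents under a colour-blind ticketing rule."""
    detection = min(1.0, cameras * catch_per_camera)  # share of speeders caught
    return 1000 * (residents * speeding_rate * detection) / residents

# Identical driver behaviour, different camera density (hypothetical values).
heavily_monitored = tickets_per_1000(residents=10_000, cameras=12)
lightly_monitored = tickets_per_1000(residents=10_000, cameras=3)

# The heavily monitored area receives four times the tickets per capita,
# despite identical behaviour by its drivers.
print(heavily_monitored, lightly_monitored)
```

If, as ProPublica found, the more heavily monitored areas are disproportionately Black neighbourhoods, the colour-blind rule still produces a racially skewed outcome.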
Continue reading “Racist Technology in Action: “Race-neutral” traffic cameras have a racially disparate impact”

De discriminatie die in data schuilt
The Eerste Kamer (the Dutch Senate) is investigating the effectiveness of legislation against discrimination. Last Friday we were invited to tell the members of parliament about discrimination and algorithms. Below is the core of our story.
By Nadia Benaissa for Bits of Freedom on February 8, 2022
Predictive policing constrains our possibilities for better futures
In the context of the use of crime predictive software in policing, Chris Gilliard reiterated in WIRED how data-driven policing systems and programs are fundamentally premised on the assumption that historical data about crimes determines the future.
Continue reading “Predictive policing constrains our possibilities for better futures”

Crime Prediction Keeps Society Stuck in the Past
So long as algorithms are trained on racist historical data and outdated values, there will be no opportunities for change.
By Chris Gilliard for WIRED on January 2, 2022
Technologies of Black Freedoms: Calling On Black Studies Scholars, with SA Smythe
Refusing to see like a state.
By J. Khadijah Abdurahman and SA Smythe for Logic on December 25, 2021
The Humanities Can’t Save Big Tech From Itself
Hiring sociocultural workers to correct bias overlooks the limitations of these underappreciated fields.
By Elena Maris for WIRED on January 12, 2022
Predictive policing reinforces and accelerates racial bias
In a recent investigative piece, The Markup and Gizmodo analysed 5.9 million crime predictions by PredPol, crime prediction software used by law enforcement agencies in the U.S. The results confirm the racist logic of predictive policing and its impact on individuals and neighbourhoods. Compared to Whiter, middle- and upper-income neighbourhoods, Black, Latino and poor neighbourhoods were relentlessly targeted by the software, which recommended increased police presence. The fewer White residents lived in an area – and the more Black and Latino residents lived there – the more likely PredPol was to predict a crime there. Some neighbourhoods in their dataset were the subject of more than 11,000 predictions.
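The mechanism behind these numbers is a feedback loop, which can be sketched in a few lines. This is a hypothetical simulation with synthetic counts, not The Markup's data or PredPol's actual algorithm:

```python
# Feedback-loop sketch: predictions send patrols to where past records are
# highest, and patrols generate new records, reinforcing the prediction.

def run_feedback_loop(history, rounds=5):
    counts = dict(history)
    for _ in range(rounds):
        target = max(counts, key=counts.get)  # "prediction": most recorded crime
        counts[target] += 1                   # more presence -> more records
    return counts

# Two areas with identical true crime rates but unequal historical policing.
result = run_feedback_loop({"neighbourhood_A": 10, "neighbourhood_B": 9})
print(result)  # every new record lands in neighbourhood_A
```

A one-record head start in the historical data is enough to direct every subsequent patrol, and every subsequent record, to the same neighbourhood.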
Continue reading “Predictive policing reinforces and accelerates racial bias”

Dutch Data Protection Authority (AP) fines the tax agency for discriminatory data processing
The Dutch Data Protection Authority, the Autoriteit Persoonsgegevens (AP), has fined the Dutch Tax Agency 2.75 million euros for discriminatory data processing as part of the child benefits scandal.
Continue reading “Dutch Data Protection Authority (AP) fines the tax agency for discriminatory data processing”

Boete Belastingdienst voor discriminerende en onrechtmatige werkwijze
The Autoriteit Persoonsgegevens (AP), the Dutch Data Protection Authority, is imposing a fine of 2.75 million euros on the Belastingdienst. The AP is doing so because for years the Belastingdienst processed the (dual) nationality of childcare benefit applicants in an unlawful, discriminatory and therefore improper manner. These are serious violations of the privacy law, the General Data Protection Regulation (AVG).
From Autoriteit Persoonsgegevens on December 7, 2021
Politie koppelde onschuldige asielzoekers aan strafrechtelijke informatie
The police compared asylum seekers’ telephone data with criminal justice information. That was “not consistent” with the privacy law, according to the police themselves.
By Martin Kuiper and Romy van der Poel for NRC on December 7, 2021
Crime Prediction Software Promised to Be Free of Biases. New Data Shows It Perpetuates Them
Millions of crime predictions left on an unsecured server show PredPol mostly avoided Whiter neighborhoods, targeted Black and Latino neighborhoods.
By Aaron Sankin, Annie Gilbertson, Dhruv Mehrotra and Surya Mattu for The Markup on December 2, 2021
Massive Predpol leak confirms that it drives racist policing
When you or I seek out evidence to back up our existing beliefs and ignore the evidence that shows we’re wrong, it’s called “confirmation bias.” It’s a well-understood phenomenon that none of us are immune to, and thoughtful people put a lot of effort into countering it in themselves.
By Cory Doctorow for Pluralistic on December 2, 2021
A Black Woman Invented Home Security. Why Did It Go So Wrong?
Surveillance systems, no matter the intention, will always exist to serve power.
By Chris Gilliard for WIRED on November 14, 2021
Revealed: the software that studies your Facebook friends to predict who may commit a crime
Voyager, which pitches its tech to police, has suggested that indicators such as Instagram usernames showing Arab pride can signal an inclination towards extremism.
By Johana Bhuiyan and Sam Levin for The Guardian on November 17, 2021
Amnesty’s grim warning against another ‘Toeslagenaffaire’
In its report of 25 October, Amnesty slams the Dutch government’s use of discriminatory algorithms in the child benefits scandal (toeslagenaffaire) and warns that the likelihood of such a scandal occurring again is very high. The report is aptly titled ‘Xenophobic machines – Discrimination through unregulated use of algorithms in the Dutch childcare benefits scandal’ and conducts a human rights analysis of a specific sub-element of the scandal: the use of algorithms and risk models. The analysis is based on the report of the Dutch data protection authority and several other government reports.
Continue reading “Amnesty’s grim warning against another ‘Toeslagenaffaire’”

Crowd-Sourced Suspicion Apps Are Out of Control
Technology rarely invents new societal problems. Instead, it digitizes them, supersizes them, and allows them to balloon and duplicate at the speed of light. That’s exactly the problem we’ve seen with location-based, crowd-sourced “public safety” apps like Citizen.
By Matthew Guariglia for Electronic Frontier Foundation (EFF) on October 21, 2021
A Detroit community college professor is fighting Silicon Valley’s surveillance machine. People are listening.
Chris Gilliard grew up with racist policing in Detroit. He sees a new form of oppression in the tech we use every day.
By Chris Gilliard and Will Oremus for Washington Post on September 17, 2021
How Stereotyping and Bias Lingers in Product Design
Brands originally built on racist stereotypes have existed for more than a century. Now racial prejudice is also creeping into the design of tech products and algorithms.
From YouTube on September 15, 2021
Racist Technology in Action: Predicting future criminals with a bias against Black people
In 2016, ProPublica investigated the fairness of COMPAS, a system used by courts in the United States to assess the likelihood of a defendant committing another crime. COMPAS uses a risk assessment form to estimate this likelihood of reoffending, and judges are expected to take the resulting risk prediction into account when they decide on sentencing.
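ProPublica's headline finding was a gap in false positive rates: Black defendants who did not go on to reoffend were labelled high risk far more often than White defendants who did not reoffend. That measure can be sketched as follows, using made-up records rather than the actual COMPAS data:

```python
# False-positive-rate comparison per group (synthetic records).
# A false positive here: labelled high risk, but did not actually reoffend.

def false_positive_rate(records):
    """records: list of (labelled_high_risk, actually_reoffended) pairs."""
    flags_on_non_reoffenders = [high for high, reoff in records if not reoff]
    return sum(flags_on_non_reoffenders) / len(flags_on_non_reoffenders)

# Hypothetical groups: identical outcomes, different risk labels.
group_a = [(True, False), (True, False), (False, False), (True, True)]
group_b = [(True, False), (False, False), (False, False), (True, True)]

fpr_a = false_positive_rate(group_a)  # 2 of 3 non-reoffenders flagged
fpr_b = false_positive_rate(group_b)  # 1 of 3 non-reoffenders flagged
print(fpr_a, fpr_b)
```

A system can look "accurate" overall while its mistakes fall twice as often on one group, which is exactly the asymmetry ProPublica documented.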
Continue reading “Racist Technology in Action: Predicting future criminals with a bias against Black people”

An automated policing program got this man shot twice
Chicago’s predictive policing program told a man he would be involved with a shooting, but it couldn’t determine which side of the gun he would be on. Instead, it made him the victim of a violent crime.
By Matt Stroud for The Verge on May 24, 2021
Racist and classist predictive policing exists in Europe too
The enduring idea that technology can solve many of society’s existing problems continues to permeate governments. For the EUobserver, Fieke Jansen and Sarah Chander illustrate some of the problematic and harmful uses of ‘predictive’ algorithmic systems by states and public authorities across the UK and Europe.
Continue reading “Racist and classist predictive policing exists in Europe too”

EU’s new AI law risks enabling Orwellian surveillance states
“Far from a ‘human-centred’ approach, the draft law in its current form runs the risk of enabling Orwellian surveillance states,” writes @sarahchander from @edri.
By Sarah Chander for Euronews on April 22, 2021
Why EU needs to be wary that AI will increase racial profiling
This week the EU announces new regulations on artificial intelligence. It needs to set clear limits on the most harmful uses of AI, including predictive policing, biometric mass surveillance, and applications that exacerbate historic patterns of racist policing.
By Fieke Jansen and Sarah Chander for EUobserver on April 19, 2021
Rotterdam’s use of algorithms could lead to ethnic profiling
The Rekenkamer Rotterdam (the city’s Court of Audit) looked at how the city of Rotterdam is using predictive algorithms and whether that use could lead to ethical problems. In their report, they describe how the city lacks a proper overview of the algorithms it is using, and how there is no coordination, so no one takes responsibility when things go wrong. They also found that while one particular fraud detection algorithm did not use sensitive data such as nationality directly, so-called proxy variables for ethnicity – like low literacy, which may correlate with ethnicity – were still part of the calculations. According to the Rekenkamer this could lead to unfair treatment, or as we would call it: ethnic profiling.
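The proxy-variable problem the Rekenkamer describes can be illustrated with a small sketch. The data and field names below are entirely hypothetical: nationality is withheld from the score, yet a correlated feature stands in for it.

```python
# Proxy-variable sketch (synthetic data, hypothetical field names).
# Each record: (low_literacy_flag, nationality). Only the flag is "scored on";
# nationality is never given to the model, but correlates with the flag.
records = [
    (1, "non-Dutch"), (1, "non-Dutch"), (1, "non-Dutch"), (0, "non-Dutch"),
    (0, "Dutch"), (0, "Dutch"), (0, "Dutch"), (1, "Dutch"),
]

def flagged_rate(records, nationality):
    """Share of a nationality group flagged when scoring only on the proxy."""
    group = [flag for flag, nat in records if nat == nationality]
    return sum(group) / len(group)

# Scoring only on the proxy still treats the groups very differently.
print(flagged_rate(records, "non-Dutch"))  # 0.75
print(flagged_rate(records, "Dutch"))      # 0.25
```

Removing the sensitive attribute from the inputs does not remove it from the outcome; as long as a correlated proxy remains, the disparity survives.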
Continue reading “Rotterdam’s use of algorithms could lead to ethnic profiling”

Racist Technology in Action: Amazon’s racist facial ‘Rekognition’
An already infamous example of racist technology is Amazon’s facial recognition system ‘Rekognition’, which showed an enormous racial and gender bias. Researcher and founder of the Algorithmic Justice League Joy Buolamwini (the ‘poet of code’), together with Deborah Raji, meticulously reconstructed how accurate Rekognition was at identifying different types of faces. Buolamwini and Raji’s study has been extremely consequential in laying bare the racism and sexism in these facial recognition systems and was featured in the popular Coded Bias documentary.
Continue reading “Racist Technology in Action: Amazon’s racist facial ‘Rekognition’”

This is the EU’s chance to stop racism in artificial intelligence
As the European Commission prepares its legislative proposal on artificial intelligence, human rights groups are watching closely for clear rules to limit discriminatory AI. In practice, this means a ban on biometric mass surveillance practices and red lines (legal limits) to stop harmful uses of AI-powered technologies.
By Sarah Chander for European Digital Rights (EDRi) on March 16, 2021
IBM is failing to increase diversity while successfully producing racist information technologies
Charlton McIlwain, author of the book Black Software, takes a good hard look at IBM in a longread for Logic magazine.
Continue reading “IBM is failing to increase diversity while successfully producing racist information technologies”

What Happens When Our Faces Are Tracked Everywhere We Go?
When a secretive start-up scraped the internet to build a facial-recognition tool, it tested a legal and ethical limit — and blew the future of privacy in America wide open.
By Kashmir Hill for The New York Times on March 18, 2021
Race, tech, and medicine: Remarks from Dr. Dorothy Roberts and Dr. Ruha Benjamin
By Dorothy Roberts, Kim M Reynolds and Ruha Benjamin for Our Data Bodies Project on August 15, 2020
How a Discriminatory Algorithm Wrongly Accused Thousands of Families of Fraud
Dutch tax authorities used algorithms to automate an austere and punitive war on low-level fraud—the results were catastrophic.
By Gabriel Geiger for VICE on March 1, 2021
The Fort Rodman Experiment
In 1965, IBM launched the most ambitious attempt ever to diversify a tech company. The industry still needs to learn the lessons of that failure.
By Charlton McIlwain for Logic on December 20, 2021
Technology has codified structural racism – will the EU tackle racist tech?
The EU is preparing its ‘Action Plan’ to address structural racism in Europe. With digital high on the EU’s legislative agenda, it’s time we tackle racism perpetuated by technology, writes Sarah Chander.
By Sarah Chander for EURACTIV.com on September 3, 2020
The Dutch government’s love affair with ethnic profiling
In his article for One World, Florentijn van Rootselaar shows how the Dutch government uses automated systems to profile certain groups based on their ethnicity. He uses several examples to expose how, even though Western countries are often quick to denounce China’s use of technology to surveil, profile and oppress the Uighurs, the same states themselves use or contribute to the development of similar technologies.
Continue reading “The Dutch government’s love affair with ethnic profiling”

LAPD Sought Ring Home Security Video Related to Black Lives Matter Protests
Emails show that the LAPD repeatedly asked camera owners for footage during the demonstrations, raising First Amendment concerns.
By Sam Biddle for The Intercept on February 16, 2021
Decode the Default
Technology has never been colorblind. It’s time to abolish notions of “universal” users of software.
From The Internet Health Report 2020 on January 1, 2021
How the LAPD and Palantir Use Data to Justify Racist Policing
In a new book, a sociologist who spent months embedded with the LAPD details how data-driven policing techwashes bias.
By Mara Hvistendahl for The Intercept on January 30, 2021
Racism and “Smart Borders”
While many of us had our attention focused on the use of biometric surveillance technologies in managing the COVID-19 pandemic, in a new UN report Prof. E. Tendayi Achiume forcefully puts the spotlight on the racial and discriminatory dimension of biometric surveillance technology in border enforcement.
Continue reading “Racism and “Smart Borders””

Hoe Nederland A.I. inzet voor etnisch profileren
China using artificial intelligence to oppress the Uyghurs: sounds like a faraway problem? The Netherlands, too, follows (and prosecutes) specific population groups with algorithms. Like in Roermond, where cameras sound the alarm for cars with Eastern European licence plates.
By Florentijn van Rootselaar for OneWorld on January 14, 2021
How our data encodes systematic racism
Technologists must take responsibility for the toxic ideologies that our data sets and algorithms reflect.
By Deborah Raji for MIT Technology Review on December 10, 2020