The devastating consequences of risk-based profiling by the Dutch police

Diana Sardjoe writes for Fair Trials about how her sons were profiled by the Amsterdam police on the basis of risk models (a form of predictive policing) called ‘Top600’ (for adults) and ‘Top400’ (for people aged 12 to 23). Because of this profiling her sons were “continually monitored and harassed by police.”

Continue reading “The devastating consequences of risk-based profiling by the Dutch police”

My sons were profiled by a racist predictive policing system — the AI Act must prohibit these systems

When I found out my sons were placed on lists called the ‘Top 600’ and the ‘Top 400’ by the local Amsterdam council, I thought I was finally getting help. The council says the purpose of these lists, created by predictive and profiling systems, is to identify and give young people who have been in contact with the police “extra attention from the council and organisations such as the police, local public health service and youth protection,” to prevent them from coming into contact with police again. This could not have been further from the truth.

By Diana Sardjoe for Medium on September 28, 2022

NoTechFor: Forced Assimilation

Following the 2015 terror attack in Denmark, the state amped up its data analytics capabilities for counter-terrorism within the police and the Danish Security and Intelligence Service (PET). Denmark, a country which hosts an established, normalised and widely accepted public surveillance infrastructure – justified in service of public health and of greater centralisation and coordination between government and municipalities in the delivery of citizen services – also boasts an intelligence service with extraordinarily expansive surveillance capabilities that enjoys wide exemptions from data protection regulations.

From No Tech for Tyrants on July 13, 2020

The Dutch government wants to continue to spy on activists’ social media

Investigative journalism by NRC brought to light that the Dutch NCTV (the National Coordinator for Counterterrorism and Security) uses fake social media accounts to track Dutch activists. The agency also targets activists working in the social justice or anti-discrimination space and tracks their work, sentiments and movements through their social media accounts. This is a clear example of how digital communication allows governments to intensify their surveillance and criminalisation of political opinions outside the mainstream.

Continue reading “The Dutch government wants to continue to spy on activists’ social media”

Bits of Freedom speaks to the Dutch Senate on discriminatory algorithms

Through an official parliamentary investigative committee, the Dutch Senate is examining how new regulation or law-making processes can help combat discrimination in the Netherlands. The focus of the investigative committee is on four broad domains: the labour market, education, social security and policing. As part of these wide-ranging investigative efforts, the Senate is hearing from a range of experts and civil society organisations. Most notably, one contribution stands out from the perspective of racist technology: Nadia Benaissa from Bits of Freedom highlighted the dangers of predictive policing and other uses of automated systems in law enforcement.

Continue reading “Bits of Freedom speaks to the Dutch Senate on discriminatory algorithms”

Racist Technology in Action: “Race-neutral” traffic cameras have a racially disparate impact

Traffic cameras that are used to automatically hand out speeding tickets don’t look at the colour of the person driving the speeding car. Yet, ProPublica has convincingly shown how cameras that don’t have a racial bias can still have a disparate racial impact.
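A minimal sketch of how such a disparate impact can be shown even though the cameras never see race: compare ticket rates per resident across neighbourhoods grouped by demographic make-up. All data, thresholds and column names below are invented for illustration, not ProPublica's actual figures or code.

```python
# Hypothetical illustration: a "race-neutral" camera still produces a
# racially disparate impact if tickets per resident differ systematically
# between neighbourhoods with different demographic make-ups.
import pandas as pd

neighbourhoods = pd.DataFrame({
    "name":           ["A", "B", "C", "D"],
    "population":     [12000, 9000, 15000, 11000],
    "share_black":    [0.72, 0.65, 0.12, 0.08],   # hypothetical census shares
    "tickets_issued": [4100, 2900, 1800, 1200],   # hypothetical camera tickets
})

neighbourhoods["tickets_per_1000"] = (
    neighbourhoods["tickets_issued"] / neighbourhoods["population"] * 1000
)

# Compare the average ticket rate in majority-Black neighbourhoods with the rest.
majority_black = neighbourhoods["share_black"] > 0.5
rate_majority_black = neighbourhoods.loc[majority_black, "tickets_per_1000"].mean()
rate_other = neighbourhoods.loc[~majority_black, "tickets_per_1000"].mean()

print(f"Majority-Black neighbourhoods: {rate_majority_black:.0f} tickets per 1,000 residents")
print(f"Other neighbourhoods:          {rate_other:.0f} tickets per 1,000 residents")
print(f"Disparity ratio:               {rate_majority_black / rate_other:.1f}x")
```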

Continue reading “Racist Technology in Action: “Race-neutral” traffic cameras have a racially disparate impact”

The discrimination hidden in data

The Dutch Senate (Eerste Kamer) is investigating the effectiveness of legislation against discrimination. Last Friday we had the opportunity to tell the members of parliament about discrimination and algorithms. Below is the core of our story.

By Nadia Benaissa for Bits of Freedom on February 8, 2022

Predictive policing reinforces and accelerates racial bias

The Markup and Gizmodo, in a recent investigative piece, analysed 5.9 million crime predictions by PredPol, crime prediction software used by law enforcement agencies in the U.S. The results confirm the racist logic of predictive policing and its impact on individuals and neighbourhoods. Compared to Whiter, middle- and upper-income neighbourhoods, Black, Latino and poor neighbourhoods were relentlessly targeted by the software, which recommended increased police presence. The fewer White residents who lived in an area – and the more Black and Latino residents who lived there – the more likely PredPol was to predict a crime there. In their dataset, some neighbourhoods were the subject of more than 11,000 predictions.
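A hedged sketch of the kind of analysis this implies (not The Markup's actual code; the block-level data and column names are hypothetical): correlate the number of crime predictions an area receives with the demographic composition of its residents.

```python
# Hypothetical illustration of the analysis described above: does the number
# of crime predictions an area receives rise with its share of Black and
# Latino residents, and fall with its share of White residents?
import pandas as pd

blocks = pd.DataFrame({
    "share_black_latino": [0.85, 0.70, 0.55, 0.30, 0.15, 0.05],
    "share_white":        [0.10, 0.20, 0.35, 0.60, 0.75, 0.90],
    "prediction_count":   [11200, 8400, 5100, 2200, 900, 300],
})

# A positive correlation with share_black_latino and a negative one with
# share_white would match the pattern The Markup and Gizmodo report.
print(blocks["prediction_count"].corr(blocks["share_black_latino"]))
print(blocks["prediction_count"].corr(blocks["share_white"]))
```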

Continue reading “Predictive policing reinforces and accelerates racial bias”

Belastingdienst fined for discriminatory and unlawful practices

The Dutch Data Protection Authority (Autoriteit Persoonsgegevens, AP) has imposed a fine of 2.75 million euros on the Belastingdienst (the Dutch Tax Administration). The AP did so because for years the Belastingdienst processed the (dual) nationality of applicants for childcare benefits in an unlawful, discriminatory and therefore improper manner. These are serious violations of the privacy law, the General Data Protection Regulation (GDPR).

From Autoriteit Persoonsgegevens on December 7, 2021

Massive PredPol leak confirms that it drives racist policing

When you or I seek out evidence to back up our existing beliefs and ignore the evidence that shows we’re wrong, it’s called “confirmation bias.” It’s a well-understood phenomenon that none of us are immune to, and thoughtful people put a lot of effort into countering it in themselves.

By Cory Doctorow for Pluralistic on December 2, 2021

Amnesty’s grim warning against another ‘Toeslagenaffaire’

In its report of 25 October, Amnesty slams the Dutch government’s use of discriminatory algorithms in the child benefits scandal (toeslagenaffaire) and warns that the likelihood of such a scandal occurring again is very high. The report, aptly titled ‘Xenophobic machines – Discrimination through unregulated use of algorithms in the Dutch childcare benefits scandal’, conducts a human rights analysis of a specific sub-element of the scandal: the use of algorithms and risk models. It is based on the findings of the Dutch data protection authority and several other government reports.

Continue reading “Amnesty’s grim warning against another ‘Toeslagenaffaire’”

Crowd-Sourced Suspicion Apps Are Out of Control

Technology rarely invents new societal problems. Instead, it digitizes them, supersizes them, and allows them to balloon and duplicate at the speed of light. That’s exactly the problem we’ve seen with location-based, crowd-sourced “public safety” apps like Citizen.

By Matthew Guariglia for Electronic Frontier Foundation (EFF) on October 21, 2021

Racist Technology in Action: Predicting future criminals with a bias against Black people

In 2016, ProPublica investigated the fairness of COMPAS, a system used by courts in the United States to assess the likelihood of a defendant committing another crime. COMPAS uses a risk assessment form to score how likely a defendant is to reoffend. Judges are expected to take this risk prediction into account when they decide on sentencing.
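A minimal sketch of the kind of fairness check ProPublica performed, using invented records rather than the real COMPAS dataset: among defendants who did not go on to reoffend, how often did the tool still label them high risk, broken down by race?

```python
# Hypothetical illustration of a false-positive-rate comparison.
# All records below are invented; only the method mirrors ProPublica's analysis.
import pandas as pd

defendants = pd.DataFrame({
    "race":       ["Black", "Black", "Black", "Black", "White", "White", "White", "White"],
    "high_risk":  [1, 1, 0, 1, 0, 1, 1, 0],   # COMPAS-style label (1 = high risk)
    "reoffended": [0, 1, 0, 0, 0, 1, 0, 0],   # observed outcome two years later
})

# False positive rate: labelled high risk despite not reoffending.
no_reoffense = defendants[defendants["reoffended"] == 0]
fpr_by_race = no_reoffense.groupby("race")["high_risk"].mean()
print(fpr_by_race)
# ProPublica found this rate was roughly twice as high for Black defendants
# as for white defendants.
```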

Continue reading “Racist Technology in Action: Predicting future criminals with a bias against Black people”

Racist and classist predictive policing exists in Europe too

The enduring idea that technology will be able to solve many of the existing problems in society continues to permeate across governments. For the EUObserver, Fieke Jansen and Sarah Chander illustrate some of the problematic and harmful uses of ‘predictive’ algorithmic systems by states and public authorities across the UK and Europe.

Continue reading “Racist and classist predictive policing exists in Europe too”

Why EU needs to be wary that AI will increase racial profiling

This week the EU announces new regulations on artificial intelligence. It needs to set clear limits on the most harmful uses of AI, including predictive policing, biometric mass surveillance, and applications that exacerbate historic patterns of racist policing.

By Fieke Jansen and Sarah Chander for EUobserver on April 19, 2021

Rotterdam’s use of algorithms could lead to ethnic profiling

The Rekenkamer Rotterdam (the city’s Court of Audit) looked at how the city of Rotterdam is using predictive algorithms and whether that use could lead to ethical problems. In their report, they describe how the city lacks a proper overview of the algorithms that it is using, how there is no coordination and thus no one takes responsibility when things go wrong, and how one particular fraud detection algorithm did not use sensitive data (like nationality) directly, but still included so-called proxy variables for ethnicity – like low literacy, which might correlate with ethnicity – in its calculations. According to the Rekenkamer this could lead to unfair treatment, or as we would call it: ethnic profiling.
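A hedged sketch of how a proxy variable can reintroduce a protected attribute that was formally excluded (all data below are invented, and the toy model is not Rotterdam's actual system): a model that never receives nationality can still score groups differently if an input like low literacy correlates with it.

```python
# Hypothetical illustration of a proxy variable: the toy fraud-risk score below
# never uses the protected attribute as an input, but because "low_literacy"
# correlates with it in the (invented) data, scores still differ by group.
import pandas as pd

people = pd.DataFrame({
    "non_dutch_background": [1, 1, 1, 1, 0, 0, 0, 0],  # protected attribute, NOT given to the model
    "low_literacy":         [1, 1, 1, 0, 1, 0, 0, 0],  # proxy variable that the model does use
})

# Toy "risk model": treats low literacy as a fraud-risk factor.
people["risk_score"] = 0.2 + 0.6 * people["low_literacy"]

# Even without access to the protected attribute, the average risk score
# differs between the two groups: ethnic profiling by proxy.
print(people.groupby("non_dutch_background")["risk_score"].mean())
```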

Continue reading “Rotterdam’s use of algorithms could lead to ethnic profiling”

Racist Technology in Action: Amazon’s racist facial ‘Rekognition’

An already infamous example of racist technology is Amazon’s facial recognition system ‘Rekognition’, which had an enormous racial and gender bias. Researcher and founder of the Algorithmic Justice League Joy Buolamwini (the ‘poet of code’), together with Deborah Raji, meticulously reconstructed how accurate Rekognition was in identifying different types of faces. Buolamwini and Raji’s study has been extremely consequential in laying bare the racism and sexism in these facial recognition systems and was featured in the popular Coded Bias documentary.
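A minimal sketch of the evaluation approach behind that study (the counts below are hypothetical, not Buolamwini and Raji's data): measure a face analysis system's accuracy separately for each intersectional subgroup instead of reporting a single overall number.

```python
# Hypothetical illustration of per-subgroup accuracy reporting.
import pandas as pd

results = pd.DataFrame({
    "subgroup":  ["darker-skinned women", "darker-skinned men",
                  "lighter-skinned women", "lighter-skinned men"],
    "n_faces":   [270, 300, 290, 310],   # hypothetical sample sizes
    "n_correct": [185, 264, 268, 306],   # hypothetical correct classifications
})

results["accuracy"] = results["n_correct"] / results["n_faces"]
print(results[["subgroup", "accuracy"]])
# Buolamwini and Raji found the largest error rates for darker-skinned women,
# exposing the racial and gender bias summarised above.
```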

Continue reading “Racist Technology in Action: Amazon’s racist facial ‘Rekognition’”

This is the EU’s chance to stop racism in artificial intelligence

As the European Commission prepares its legislative proposal on artificial intelligence, human rights groups are watching closely for clear rules to limit discriminatory AI. In practice, this means a ban on biometric mass surveillance practices and red lines (legal limits) to stop harmful uses of AI-powered technologies.

By Sarah Chander for European Digital Rights (EDRi) on March 16, 2021
