‘Oximeters’ are small medical devices used to measure levels of oxygen in someone’s blood. The oximeter can be clipped over someone’s finger and uses specific frequencies of light beamed through the skin to measure the saturation of oxygen in the blood.
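As a hypothetical illustration of how that measurement works: pulse oximeters compare the pulsatile absorbance of red and infrared light and map the resulting ratio to a saturation percentage via an empirical calibration curve. The Python sketch below uses made-up signal values and illustrative textbook constants, not any specific device’s formula; it is the empirical calibration step, historically derived largely from lighter-skinned subjects, where skin-tone bias can enter.

```python
# Minimal sketch of pulse oximetry's 'ratio of ratios' calculation.
# Signal values and calibration constants are illustrative only.

def spo2_estimate(ac_red: float, dc_red: float,
                  ac_ir: float, dc_ir: float) -> float:
    """Estimate oxygen saturation (%) from the pulsatile (AC) and
    baseline (DC) absorbance of red (~660 nm) and infrared (~940 nm)
    light shone through the finger."""
    r = (ac_red / dc_red) / (ac_ir / dc_ir)  # the 'ratio of ratios'
    # Illustrative linear calibration; real devices use empirical
    # curves fitted to reference measurements, which is where a
    # calibration population skewed towards lighter skin causes bias.
    return 110.0 - 25.0 * r

print(spo2_estimate(ac_red=0.002, dc_red=1.0, ac_ir=0.003, dc_ir=1.0))  # ≈ 93.3
```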
Continue reading “Racist Technology in Action: Oxygen meters designed for white skin”

Centering social injustice, de-centering tech
The Racism and Technology Center organised a panel titled “Centering social injustice, de-centering tech: The case of the Dutch child benefits scandal and beyond” at Privacy Camp 2022, a conference that brings together digital rights advocates, activists, academics and policymakers. Together with Merel Koning (Amnesty International), Nadia Benaissa (Bits of Freedom) and Sanne Stevens (Justice, Equity and Technology Table), we used the Dutch child benefits scandal as an example to highlight issues of deeply rooted racism and discrimination in the public sector. The fixation on algorithms and automated decision-making systems tends to obscure these fundamental problems. Often, the use of technology by governments functions to normalise and rationalise existing racist and classist practices.
Continue reading “Centering social injustice, de-centering tech”

Bits of Freedom speaks to the Dutch Senate on discriminatory algorithms
Through an official parliamentary investigative committee, the Dutch Senate is investigating how new regulation or law-making processes can help combat discrimination in the Netherlands. The committee focuses on four broad domains: the labour market, education, social security and policing. As part of these wide-ranging efforts, the Senate is hearing from a range of experts and civil society organisations. One contribution stands out from the perspective of racist technology: Nadia Benaissa from Bits of Freedom highlighted the dangers of predictive policing and other uses of automated systems in law enforcement.
Continue reading “Bits of Freedom speaks to the Dutch Senate on discriminatory algorithms”

Facebook has finally stopped enabling racial profiling for targeted advertising
Around 2016, Facebook was still proud of its ability to target ads to “Black affinity” and “White affinity” audiences on behalf of its customers. I then wrote an op-ed decrying this form of racial profiling enabled by Facebook’s data lust.
Continue reading “Facebook has finally stopped enabling racial profiling for targeted advertising”

Racist Technology in Action: “Race-neutral” traffic cameras have a racially disparate impact
Traffic cameras that are used to automatically hand out speeding tickets don’t look at the colour of the person driving the speeding car. Yet, ProPublica has convincingly shown how cameras that don’t have a racial bias can still have a disparate racial impact.
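To make that concrete, here is a minimal sketch, with entirely made-up numbers, of the kind of neighbourhood-level aggregation such an analysis rests on: the camera applies the same speed threshold to every driver, yet ticketing rates per household can still diverge sharply once grouped by neighbourhood demographics.

```python
# Hypothetical aggregates illustrating disparate impact: a single,
# 'race-neutral' ticketing rule can still burden neighbourhoods
# unequally. Numbers are invented for illustration.

def tickets_per_household(tickets: int, households: int) -> float:
    return tickets / households

majority_black = tickets_per_household(tickets=3_300, households=10_000)
majority_white = tickets_per_household(tickets=1_500, households=10_000)
print(f"rate ratio: {majority_black / majority_white:.1f}x")  # 2.2x
```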
Continue reading “Racist Technology in Action: “Race-neutral” traffic cameras have a racially disparate impact”

How our world is designed for the ‘reference man’ and why proctoring should be abolished
We believe that software used for monitoring students during online tests (so-called proctoring software) should be abolished because it discriminates against students with a darker skin colour.
Continue reading “How our world is designed for the ‘reference man’ and why proctoring should be abolished”

Predictive policing constrains our possibilities for better futures
In the context of the use of crime prediction software in policing, Chris Gilliard reiterated in WIRED how data-driven policing systems and programs are fundamentally premised on the assumption that historical data about crimes determines the future.
Continue reading “Predictive policing constrains our possibilities for better futures”

Nani Jansen Reventlow receives Dutch prize for championing privacy and digital rights
The Dutch digital rights NGO Bits of Freedom has awarded Nani Jansen Reventlow the “Felipe Rodriguez Award” for her outstanding work championing digital rights and her crucial efforts in decolonising the field. In this (Dutch language) podcast she is interviewed by Bits of Freedom’s Inge Wannet about her strategic litigation work and her ongoing fight to decolonise the digital rights field.
Continue reading “Nani Jansen Reventlow receives Dutch prize for championing privacy and digital rights”

Racist Technology in Action: U.S. universities using race in their risk algorithms as a predictor for student success
An investigation by The Markup in March 2021 revealed that some universities in the U.S. are using software with a risk algorithm that uses a student’s race as one of the factors to predict and evaluate how successful that student may be. Several universities have described race as a “high impact predictor”. The investigation found large disparities in how the software treated students of different races, with Black students deemed to be at four times higher risk than their White peers.
Continue reading “Racist Technology in Action: U.S. universities using race in their risk algorithms as a predictor for student success”

Predictive policing reinforces and accelerates racial bias
In a recent investigative piece, The Markup and Gizmodo analysed 5.9 million crime predictions by PredPol, crime prediction software used by law enforcement agencies in the U.S. The results confirm the racist logic of predictive policing and its impact on individuals and neighbourhoods. Compared to Whiter, middle- and upper-income neighbourhoods, Black, Latino and poor neighbourhoods were relentlessly targeted by the software, which recommended increased police presence. The fewer White residents who lived in an area, and the more Black and Latino residents who lived there, the more likely PredPol was to predict a crime there. Some neighbourhoods in the dataset were the subject of more than 11,000 predictions.
Continue reading “Predictive policing reinforces and accelerates racial bias”

Dutch Data Protection Authority (AP) fines the tax agency for discriminatory data processing
The Dutch Data Protection Authority, the Autoriteit Persoonsgegevens (AP), has fined the Dutch Tax Agency 2.75 million euros for discriminatory data processing as part of the child benefits scandal.
Continue reading “Dutch Data Protection Authority (AP) fines the tax agency for discriminatory data processing”

Two new technology initiatives focused on (racial) justice
We are happy to see that more and more attention is being paid to how technology intersects with problems around (racial) justice. Recently two new initiatives have launched that we would like to highlight.
Continue reading “Two new technology initiatives focused on (racial) justice”

Racist Technology in Action: Uber’s racially discriminatory facial recognition system firing workers
This example of racist technology in action combines a racist facial recognition system with exploitative working conditions and algorithmic management, providing a perfect illustration of how technology can exacerbate both economic precarity and racial discrimination.
Continue reading “Racist Technology in Action: Uber’s racially discriminatory facial recognition system firing workers”

‘Race-blind’ content moderation disadvantages Black users
Over the past months, a slew of leaks from the Facebook whistleblower Frances Haugen has exposed how the company was aware of the disparate and harmful impact of its content moderation practices. Most damning is that in the majority of instances, Facebook failed to address these harms. This Washington Post piece discusses one of the latest such revelations in detail: even though Facebook knew it would come at the expense of Black users, its algorithm to detect and remove hate speech was programmed to be ‘race-blind’.
Continue reading “‘Race-blind’ content moderation disadvantages Black users”

Dutch Scientific Council knows: AI is neither neutral nor always rational
AI should be seen as a new system technology, according to The Netherlands Scientific Council for Government Policy, meaning that its impact is large, affects the whole of society, and is hard to predict. In their new Mission AI report, the Council lists five challenges for successfully embedding system technologies in society, leading to ten recommendations for governments.
Continue reading “Dutch Scientific Council knows: AI is neither neutral nor always rational”

Intentional or otherwise, surveillance systems serve existing power structures
In Wired, Chris Gilliard strings together an incisive account of the racist history of surveillance: from the invention of the home security system to modern-day surveillance devices and technologies, such as Amazon’s and Google’s suites of security products.
Continue reading “Intentional or otherwise, surveillance systems serve existing power structures”

Racist Technology in Action: an AI for ethical advice turns out to be super racist
In mid-October 2021, the Allen Institute for AI launched Delphi, an AI research prototype designed “to model people’s moral judgments on a variety of everyday situations.” Put simply: they made a machine that tries to do ethics.
Continue reading “Racist Technology in Action: an AI for ethical advice turns out to be super racist”

Amnesty’s grim warning against another ‘Toeslagenaffaire’
In its report of 25 October, Amnesty slams the Dutch government’s use of discriminatory algorithms in the child benefits scandal (toeslagenaffaire) and warns that the likelihood of such a scandal occurring again is very high. The report is aptly titled ‘Xenophobic machines – Discrimination through unregulated use of algorithms in the Dutch childcare benefits scandal’ and conducts a human rights analysis of a specific sub-element of the scandal: the use of algorithms and risk models. It draws on the findings of the Dutch data protection authority and several other government reports.
Continue reading “Amnesty’s grim warning against another ‘Toeslagenaffaire’”

Regulating big tech to make sure nobody is excluded
Our very own Naomi Appelman was interviewed for Atlas, a Dutch television show about science and current affairs. She talked about her research into what laws and regulations democracies should develop to ensure that large technology companies don’t unnecessarily exclude people.
Continue reading “Regulating big tech to make sure nobody is excluded”

Digital Rights for All: harmed communities should be front and centre
Earlier this month, the Digital Freedom Fund kicked off a series of online workshops as part of its ‘Digital Rights for All’ programme. In this post, Laurence Meyer details the reasons for the initiative, whose fundamental aim is to address why the individuals and communities most affected by the harms of technologies are not centred in the advocacy, policy, and strategic litigation work on digital rights in Europe, and how to tackle challenges around funding, sustainable collaborations and language barriers.
Continue reading “Digital Rights for All: harmed communities should be front and centre”

Racist Technology in Action: Facebook labels black men as ‘primates’
Amid the reckoning of the Black Lives Matter movement in the summer of 2020, the Daily Mail, a British tabloid, posted a video featuring Black men in altercations with police and white civilians. In the New York Times, Ryan Mac reports how Facebook users who watched that video saw an automated prompt asking whether they would like to “keep seeing videos about Primates,” despite the video having no relation to primates or monkeys.
Continue reading “Racist Technology in Action: Facebook labels black men as ‘primates’”

Why ‘debiasing’ will not solve racist AI
Policy makers are starting to understand that many systems running on AI exhibit some form of racial bias. So they are happy when computer scientists tell them that ‘debiasing’ is a solution for these problems: testing the system for racial and other forms of bias, and making adjustments until these no longer show up in the results.
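As a hypothetical illustration of what that test-and-adjust loop measures, the sketch below computes one common fairness metric, the demographic parity gap (the difference in favourable outcome rates between groups). The data and the pass/fail threshold are made up; real audits use a wider battery of metrics, and a passing score says nothing about the injustices baked into the data and the deployment context.

```python
# Minimal sketch of a 'debiasing'-style audit: measure the gap in
# favourable outcome rates across groups and flag the system if it
# exceeds a chosen threshold. Data and threshold are hypothetical.
from collections import defaultdict

def demographic_parity_gap(decisions: list[tuple[str, int]]) -> float:
    """decisions: (group, outcome) pairs, where outcome 1 is favourable."""
    totals: dict[str, int] = defaultdict(int)
    favourable: dict[str, int] = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favourable[group] += outcome
    rates = {g: favourable[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

sample = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
gap = demographic_parity_gap(sample)
print(f"gap = {gap:.2f}, passes 0.05 threshold: {gap <= 0.05}")  # 0.33, False
```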
Continue reading “Why ‘debiasing’ will not solve racist AI”

Big Tech is propped up by a globally exploited workforce
Behind the promise of automation and the advances in machine learning and AI often paraded by tech companies like Amazon, Google, Facebook and Tesla lies a deeply exploitative industry of cheap human labour. In an excerpt published on Rest of World from his forthcoming book, “Work Without the Worker: Labour in the Age of Platform Capitalism,” Phil Jones illustrates how the hidden labour of automation is outsourced to marginalised, racialised and disenfranchised populations within the Global North, as well as in the Global South.
Continue reading “Big Tech is propped up by a globally exploited workforce”

Photo filters are keeping colorism alive
Many people use filters on social media to ‘beautify’ their pictures. In this article, Tate Ryan-Mosley discusses how these beauty filters can perpetuate colorism. Colorism has a long and complicated history, but can be summarised as a preference for whiter skin as opposed to darker skin. Ryan-Mosley explains that “though related to racism, it’s distinct in that it can affect people regardless of their race, and can have different effects on people of the same background.” The harmful effects of colorism, ranging from discrimination to mental health issues or the use of toxic skin-lightening products, are found across races and cultures.
Continue reading “Photo filters are keeping colorism alive”

Racist Technology in Action: White preference in mortgage-approval algorithms
A very clear example of racist technology was exposed by Emmanuel Martinez and Lauren Kirchner in an article for The Markup. Algorithms used by a variety of American banks and lenders to automatically assess or advise on mortgages display clear racial disparities. In national data from the United States in 2019, they found that “loan applicants of color were 40%–80% more likely to be denied than their White counterparts. In certain metro areas, the disparity was greater than 250%.”
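The ‘more likely to be denied’ figure is simple arithmetic over grouped denial rates; here is a minimal sketch with hypothetical counts.

```python
# How much more likely one group is to be denied than another,
# expressed as a percentage. Counts below are invented; the
# investigation itself used national 2019 mortgage application data.

def relative_denial_rate(denied_a: int, total_a: int,
                         denied_b: int, total_b: int) -> float:
    """Return how much more likely group A is to be denied than
    group B, e.g. 80.0 means '80% more likely'."""
    rate_a = denied_a / total_a
    rate_b = denied_b / total_b
    return (rate_a / rate_b - 1) * 100

print(relative_denial_rate(180, 1_000, 100, 1_000))  # 80.0, i.e. 80% more likely
```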
Continue reading “Racist Technology in Action: White preference in mortgage-approval algorithms”

Government: Stop using discriminatory algorithms
In her Volkskrant opinion piece, Nani Jansen Reventlow makes a forceful argument for the government to stop using algorithms that lead to discrimination and exclusion. Reventlow, director of the Digital Freedom Fund, employs a myriad of examples to show how disregarding the social nature of technological systems can reproduce existing social injustices such as racism or discrimination. The automatic fraud detection system SyRI, which was ruled in violation of fundamental rights (and its dangerous successor, Super SyRI), is discussed, as well as the racist proctoring software we wrote about earlier.
Continue reading “Government: Stop using discriminatory algorithms”

Proof for Twitter’s bias toward lighter faces
We have written about the racist cropping algorithm that Twitter uses, and have shared how Twitter tried to fix the symptoms. Twitter also instituted an ‘algorithmic bug bounty’, asking researchers to prove bias in its algorithms.
Continue reading “Proof for Twitter’s bias toward lighter faces”

Uber-racist: Racial discrimination in dynamic pricing algorithms
Racial discrimination in dynamic pricing algorithms is neither surprising nor new. VentureBeat writes about another recent study that supports these findings, in the context of the dynamic pricing algorithms used by ride-hailing companies such as Uber and Lyft. Neighbourhoods that were poorer and had larger non-white populations were significantly associated with higher fare prices. A similar issue was discovered in Airbnb’s ‘Smart Pricing’ feature, which aims to help hosts secure more bookings: it turned out to be detrimental to Black hosts, leading to greater social inequality (even if unintentionally).
Continue reading “Uber-racist: Racial discrimination in dynamic pricing algorithms”

Racist Technology in Action: Racist search engine ads
Back in 2013, Harvard professor Latanya Sweeney was one of the first people to demonstrate racism (she called it ‘discrimination’) in online algorithms. She did this with her research on the ad delivery practices of Google.
Continue reading “Racist Technology in Action: Racist search engine ads”

The use of racist technology is not inevitable, but a choice we make
Last month, we wrote a piece in Lilith Mag that builds on some of the examples we have previously highlighted – the Dutch childcare benefits scandal, the use of online proctoring software, and popular dating app Grindr – to underscore two central ideas.
Continue reading “The use of racist technology is not inevitable, but a choice we make”

Are we automating racism?
Vox host Joss Fong wanted to know… “Why do we think tech is neutral? How do algorithms become biased? And how can we fix these algorithms before they cause harm?”
Continue reading “Are we automating racism?”

Racist Technology in Action: Apple’s emoji keyboard reinforces Western stereotypes
Time and time again, big tech companies have shown their ability and power to (mis)represent and (re)shape our digital world. From speech to images and, most recently, the emojis that we regularly use.
Continue reading “Racist Technology in Action: Apple’s emoji keyboard reinforces Western stereotypes”

Technology can be racist and we should talk about that
The past year has been filled with examples of technologies being racist. Yet how we can fight this is hardly part of the societal debate in the Netherlands. This must change. Making these racist technologies visible is the first step towards acknowledging that technology can indeed be racist.
Continue reading “Technology can be racist and we should talk about that”

Covid-19 data: making racialised inequality in the Netherlands invisible
The CBS, the Dutch national statistics authority, issued a report in March showing that someone’s socioeconomic status is a clear risk factor for dying of Covid-19. In an insightful piece, researchers Linnet Taylor and Tineke Broer criticise this report and show that the way in which the CBS collects and aggregates data on Covid-19 cases and deaths obfuscates the full extent of racialised or ethnic inequality in the impact of the pandemic.
Continue reading “Covid-19 data: making racialised inequality in the Netherlands invisible”

Tech companies poured 3.8 billion USD into racial justice, but to what avail?
The Plug and Fast Company looked at what happened to the 3.8 billion dollars that US-based tech companies committed to diversity, equity, and inclusion in response to the Black Lives Matter protests.
Continue reading “Tech companies poured 3.8 billion USD into racial justice, but to what avail?”

Human-in-the-loop is not the magic bullet to fix AI harms
In many discussions and policy proposals about addressing the harms of AI and algorithmic decision-making, much attention and hope has been placed on human oversight as a solution. This article by Ben Green and Amba Kak urges us to question the limits of human oversight rather than seeing it as a magic bullet. For example, calling for ‘meaningful’ oversight sounds better in theory than in practice. Humans can be prone to automation bias, can struggle to evaluate and make decisions based on an algorithm’s results, or can exhibit racial biases in response to algorithms. Consequently, these effects can have racist outcomes, as has already been shown in areas such as policing and housing.
Continue reading “Human-in-the-loop is not the magic bullet to fix AI harms”

Racist Technology in Action: Proctoring software disadvantaging students of colour in the Netherlands
In an opinion piece in Parool, the Racism and Technology Center wrote about how Dutch universities use proctoring software with facial recognition technology that systematically disadvantages students of colour (see the English translation of the opinion piece). The Center has written earlier about the racial bias of these systems, which leads to Black students being excluded from exams or labelled as frauds because the software does not properly recognise their faces. Despite the clear proof that Proctorio disadvantages students of colour, the University of Amsterdam still used it extensively during this June’s exam weeks.
Continue reading “Racist Technology in Action: Proctoring software disadvantaging students of colour in the Netherlands”

Call to the University of Amsterdam: Stop using racist proctoring software
The University of Amsterdam can no longer justify the use of proctoring software for remote examinations now that we know that it has a negative impact on people of colour.
Continue reading “Call to the University of Amsterdam: Stop using racist proctoring software”

Long overdue: Google has improved its camera app to work better for Black people
The following short video by Vox shows how white skin has always been the norm in photography. Black people didn’t start to look good on film until the 1970s, when furniture makers complained to Kodak that its film didn’t render the difference between dark- and light-grained wood, and chocolate companies were upset that you couldn’t see the difference between dark and light chocolate.
Continue reading “Long overdue: Google has improved its camera app to work better for Black people”