Tech workers face retaliation for Palestine solidarity
As we wrote earlier, tech companies are deeply complicit in the current genocide in Gaza as well as in the broader oppression in the occupied Palestinian territories.
Dutch government’s toxic relation with using data to detect social welfare fraud
The latest episode in the twisted series titled ‘The Dutch government is wildly discriminatory, using citizens’ data to seek out social welfare fraud’ has just come out.
Racist Technology in Action: The UK Home Office’s Sorting Algorithm and the Racist Violence of Borders
In 2020, two NGOs finally forced the UK Home Office’s hand, compelling it to abandon its secretive and racist algorithm for sorting visitor visa applications. Foxglove and the Joint Council for the Welfare of Immigrants (JCWI) had been battling the algorithm for years, arguing that it was a form of institutionalised racism and calling it “speedy boarding for white people.”
The child benefits scandal: no lessons learned
“It could happen again tomorrow” is one of the most devastating conclusions of the parliamentary inquiry following the child benefits scandal.
On “The Palestine Laboratory”
A large part of Israel’s economy and global influence depends on its military-technology complex, which not only fuels the ongoing genocide in Gaza but is also exported to facilitate oppression around the world. In this thorough 2023 book, journalist Antony Loewenstein makes explicit how Israel’s military-industrial complex profits exorbitantly from exporting technologies “battle-tested” on occupied Gaza and the West Bank.
Racist Technology in Action: Generative/ing AI Bias
By now we know that generative image AI reproduces and amplifies sexism, racism, and other social systems of oppression. The latest example is WhatsApp’s AI-generated stickers, which systematically depict Palestinian men and boys with rifles and guns.
Equal love: Dating App Breeze seeks to address Algorithmic Discrimination
In a world where swiping left or right is the main route to love, which profiles a dating app shows you can change the course of your life.
Racist Technology in Action: The World Bank’s Poverty Targeting Algorithms Deprive People of Social Security
A system funded by the World Bank to assess who is most in need of support is reported to be not only faulty but also discriminatory, depriving many of their right to social security. In a recent report titled “Automated Neglect: How The World Bank’s Push to Allocate Cash Assistance Using Algorithms Threatens Rights”, Human Rights Watch outlines why, specifically, the system used in Jordan should be abandoned.
Women of colour are leading the charge against racist AI
In this Dutch-language piece for De Groene Amsterdammer, Marieke Rotman offers an accessible introduction to the main voices, both internationally and in the Netherlands, tirelessly fighting against racism and discrimination in AI systems. Not coincidentally, most of the people doing this labour are women of colour. The piece guides you through their impressive work and their leading perspectives on the dynamics of racism and technology.
Connecting the dots between early computing, labour history, and plantations
In this accessible longread, Meredith Whittaker takes us through complex and contested 19th-century histories to connect the birth of modern computing to plantation technologies and industrial labour control.
Computational memory and coloniality: a chain with 8 bits
In this short piece for Logic(s), Zainab Aliyu shares part of her artistic research. In only a few paragraphs she is able to craft a connection between Yoruba traditional divination and computation through an exploration of the concept of memory.
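Aliyu’s titular image maps neatly onto computing fundamentals: a chain with eight binary positions can encode 2^8 = 256 distinct configurations, exactly the state space of a single byte. A minimal sketch of that correspondence (our own illustration, with invented variable names):

```python
# Each of the chain's eight positions falls into one of two states,
# encoded here as 0 or 1 -- structurally identical to one byte.
from itertools import product

configurations = list(product((0, 1), repeat=8))
print(len(configurations))  # 256, the full state space of a byte

# A single cast of the chain can then be read as an integer 0-255.
cast = (1, 0, 1, 1, 0, 0, 1, 0)
value = int("".join(map(str, cast)), 2)
print(value)  # 178
```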
Racist Technology in Action: Image recognition is still not capable of differentiating gorillas from Black people
If this title feels like déjà vu, it is because you most likely have, in fact, seen this before (perhaps even in our newsletter). The controversy first arose back in 2015, when Google released image recognition software that kept mislabelling Black people as gorillas (read here and here).
Metaphors of AI: “Gunpowder of the 21st Century”
With the rapid development of AI systems, more and more people are trying to grapple with the potential impact of these systems on our societies and daily lives. One often-used way to make sense of AI is through metaphors, which either help to clarify things or horribly muddy the waters.
What problems are AI-systems even solving? “Apparently, too few people ask that question”
In this interview with Felienne Hermans, Professor of Computer Science at the Vrije Universiteit Amsterdam, she discusses the sore lack of diversity in the white male-dominated world of programming, the importance of teaching people how to code, and the problematic uses of AI systems.
Stories of everyday life with AI in the global majority
This collection by the Data & Society Research Institute sheds an intimate and grounded light on the impact AI systems can have. The guiding question that connects all 13 non-fiction pieces in Parables of AI in/from the Majority World: An Anthology is: what stories can be told about a world in which solving societal issues is more and more dependent on AI-based and data-driven technologies? By narrating ordinary, everyday experiences in the majority world, the book, edited by Rigoberto Lara Guzmán, Ranjit Singh and Patrick Davison, slowly disentangles the global and unequally distributed impact of digital technologies.
An alliance against military AI
This past week the Dutch government hosted and organised the military AI conference REAIM 2023. Together with eight other NGOs, we signed an open letter, initiated by Oumaima Hajri, that calls on the Dutch government to stop promoting narratives of “innovation” and “opportunities” and, rather, centre the very real and often disparate human impact.
From Vrij Nederland: Do we get what we deserve?
Let me make the question specific to my own field: are decisions that are made by technology just? Do you deserve the decision that rolls out of the machine?
What’s at stake with losing (Black) Twitter and moving to (white) Mastodon?
The imminent demise of Twitter after Elon Musk’s takeover sparked an exodus of people leaving the platform, which is only expected to increase. The significant increase in hate speech and the generally hostile atmosphere created by the erratic decrees of its owner (such as Trump’s reinstatement) made, in the New Yorker writer Jelani Cobb’s words, “remaining completely untenable”. This often vocal movement of people off the platform has sparked a debate on what people stand to lose and what the alternative is.
Racist Technology in Action: Let’s make an avatar! Of sexy women and tough men of course
Just upload a selfie to the “AI avatar app” Lensa and it will generate a digital portrait of you. Think, for example, of a slightly fitter or more beautiful version of yourself as an astronaut or the lead singer in a band. If you are a man, that is. As it turns out, for women, and especially women of Asian heritage, Lensa churns out pornified, sexy and skimpily clothed avatars.
Report: How police surveillance tech reinforces abuses of power
The UK organisation No Tech for Tyrants (NT4T) has published an extensive report on the use of surveillance technologies by the police in the UK, US, Mexico, Brazil, Denmark and India, written in collaboration with researchers and activists from these countries. The report, titled “Surveillance Tech Perpetuates Police Abuse of Power”, examines the relationship between policing and technology through in-depth case studies.
Racist Technology in Action: How hiring tools can be sexist and racist
One of the classic examples of how AI systems can reinforce social injustice is Amazon’s A.I. hiring tool. In 2014, Amazon built an ‘A.I.-powered’ tool to assess resumes and recommend the top candidates to be invited for an interview. However, the tool turned out to be very biased, systematically preferring men over women.
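The mechanism is worth spelling out: a model trained on years of historical hiring decisions learns whatever signals correlate with those decisions, including gendered terms in resume text. Below is a deliberately tiny sketch of that failure mode, using synthetic data and invented feature names; it is not Amazon’s actual system, only an illustration of how biased labels become biased model weights.

```python
# Toy illustration: a classifier trained on historically biased hiring
# outcomes learns to penalise gendered resume features.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Synthetic resumes: in this invented training data, past outcomes
# favoured men, so gendered tokens correlate with rejection.
resumes = [
    "captain men's chess club, software engineering",
    "women's coding society lead, software engineering",
    "men's debate team, data analysis",
    "women's robotics club, data analysis",
]
hired = [1, 0, 1, 0]  # the biased historical labels

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The learned weight on the token "women" is negative: the model has
# absorbed the historical bias as if it were a job-relevant signal.
idx = vectorizer.vocabulary_["women"]
print(model.coef_[0][idx])  # negative coefficient
```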
Meta forced to change its advertisement algorithm to address algorithmic discrimination
In his New York Times article, Mike Isaac describes how Meta is implementing a new system to automatically check whether the housing, employment and credit ads it hosts are shown to people equally. This move follows a 111,054 US dollar fine issued to Meta by the US Justice Department, because its ad systems had been shown to discriminate against its users by, amongst other things, excluding Black people from seeing certain housing ads in predominantly white neighbourhoods. This is the outcome of a long process, which we have written about previously.
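Isaac’s article does not detail the system’s internals, but the underlying check is simple to state: compare the demographic make-up of the people an ad is actually delivered to with the make-up of the eligible audience, and flag deliveries that drift too far apart. A minimal sketch of such a skew measurement (our own simplification with invented numbers, not Meta’s actual implementation):

```python
# Minimal sketch: how far does an ad's actual delivery deviate from
# the demographic make-up of its eligible audience?

def delivery_skew(eligible: dict, delivered: dict) -> dict:
    """Ratio of each group's share of deliveries to its share of the
    eligible audience. 1.0 means proportional delivery; values well
    below 1.0 mean the group is under-served by the algorithm."""
    total_eligible = sum(eligible.values())
    total_delivered = sum(delivered.values())
    return {
        group: (delivered[group] / total_delivered)
        / (eligible[group] / total_eligible)
        for group in eligible
    }

# Hypothetical numbers for a single housing ad (illustrative only).
eligible = {"group_a": 50_000, "group_b": 50_000}
delivered = {"group_a": 8_000, "group_b": 2_000}
print(delivery_skew(eligible, delivered))
# {'group_a': 1.6, 'group_b': 0.4} -- group_b sees far fewer ads
```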
‘Smart’ technologies to detect racist chants at Dutch football matches
The KNVB (Royal Dutch Football Association) is taking a tech approach to tackling racist fan behaviour during matches, an approach that runs a great risk of falling into the techno-solutionism trap.
Racist Technology in Action: Beauty is in the eye of the AI
Where people’s notion of beauty is often steeped in cultural preferences or plain prejudice, the objectivity of an AI system would surely allow it to access a more universal conception of beauty – or so thought the developers of Beauty.AI. Alex Zhavoronkov, who consulted on the development of the Beauty.AI system, described the dystopian motivation behind the system clearly: “Humans are generally biased and there needs to be a robot to provide an impartial opinion. Beauty.AI is the first step in a much larger story, in which a mobile app trained to evaluate perception of human appearance will evolve into a caring personal assistant to help users look their best and retain their youthful looks.”
The Dutch government wants to continue to spy on activists’ social media
Investigative journalism by NRC brought to light that the Dutch NCTV (the National Coordinator for Counterterrorism and Security) uses fake social media accounts to track Dutch activists. The agency also targets activists working in the social justice or anti-discrimination space and tracks their work, sentiments and movements through their social media accounts. This is a clear example of how digital communication allows governments to intensify their surveillance and criminalisation of political opinions outside the mainstream.
Technology, Racism and Justice at Roma Day 2022
Our own Jill Toh recently presented at a symposium on the use of technology and how it intersects with racism in the context of housing and policing. She spoke on a panel organised in the context of World Roma Day 2022, titled Technolution: Yearned-for Hopes or Old Injustices?
Racism and technology in the Dutch municipal elections
Last week in the Netherlands, all eyes were on the municipal elections. Last Wednesday, the city councils that will govern for the next four years were chosen. This year’s elections were mainly characterised by a historically low turnout and the traditional overall wins for local parties. However, the focus of the Racism and Technology Center is, of course, on whether the new municipal councils and governments will put issues at the intersection of social justice and technology on the agenda.
Racist Technology in Action: Oxygen meters designed for white skin
‘Oximeters’ are small medical devices used to measure the level of oxygen in someone’s blood. The oximeter can be clipped over someone’s finger and uses specific frequencies of light beamed through the skin to measure the saturation of oxygen in the blood.
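Concretely, a pulse oximeter compares the pulsatile (AC) and baseline (DC) absorption of red (~660 nm) and infrared (~940 nm) light, and maps the resulting ‘ratio of ratios’ to a saturation estimate through an empirically calibrated curve. Because melanin also absorbs light, a curve calibrated mostly on light-skinned subjects can systematically overestimate oxygen saturation on darker skin. A minimal sketch using a commonly cited linear approximation (not any vendor’s actual calibration):

```python
# Minimal sketch of pulse-oximetry maths: the "ratio of ratios" R
# compares pulsatile (AC) to baseline (DC) absorption at two
# wavelengths; an empirical calibration then maps R to SpO2.

def spo2_estimate(ac_red, dc_red, ac_ir, dc_ir):
    """Classic textbook approximation SpO2 ~= 110 - 25 * R.

    The constants come from empirical calibration on study subjects;
    if that population is predominantly light-skinned, the curve can
    read too high on darker skin.
    """
    r = (ac_red / dc_red) / (ac_ir / dc_ir)
    return 110 - 25 * r

# Illustrative signal values only.
print(spo2_estimate(ac_red=0.02, dc_red=1.0, ac_ir=0.04, dc_ir=1.0))
# 97.5 -- a plausible healthy reading under this approximation
```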
Bits of Freedom speaks to the Dutch Senate on discriminatory algorithms
In an official parliamentary investigative committee, the Dutch Senate is investigating how new regulation or law-making processes can help combat discrimination in the Netherlands. The investigative committee focuses on four broad domains: the labour market, education, social security and policing. As part of these wide investigative efforts, the Senate is hearing from a range of experts and civil society organisations. Most notable from the perspective of racist technology was the contribution of Nadia Benaissa from Bits of Freedom, who highlighted the dangers of predictive policing and other uses of automated systems in law enforcement.
Nani Jansen Reventlow receives Dutch prize for championing privacy and digital rights
The Dutch digital rights NGO Bits of Freedom has awarded Nani Jansen Reventlow the “Felipe Rodriguez Award” for her outstanding work championing digital rights and her crucial efforts in decolonising the field. In this (Dutch language) podcast she is interviewed by Bits of Freedom’s Inge Wannet about her strategic litigation work and her ongoing fight to decolonise the digital rights field.
Dutch Data Protection Authority (AP) fines the tax agency for discriminatory data processing
The Dutch Data Protection Authority, the Autoriteit Persoonsgegevens (AP), has fined the Dutch Tax Agency 2.75 million euros for discriminatory data processing as part of the child benefits scandal.
Racist Technology in Action: Uber’s racially discriminatory facial recognition system firing workers
This example of racist technology in action combines racist facial recognition systems with exploitative working conditions and algorithmic management, perfectly illustrating how technology can exacerbate both economic precarity and racial discrimination.
‘Race-blind’ content moderation disadvantages Black users
Over the past months, a slew of leaks from Facebook whistleblower Frances Haugen has exposed how the company was aware of the disparate and harmful impact of its content moderation practices. Most damning is that, in the majority of instances, Facebook failed to address these harms. This Washington Post piece discusses one of the latest of these revelations in detail: even though Facebook knew it would come at the expense of Black users, its algorithm to detect and remove hate speech was programmed to be ‘race-blind’.
Amnesty’s grim warning against another ‘Toeslagenaffaire’
In its report of 25 October, Amnesty slams the Dutch government’s use of discriminatory algorithms in the child benefits scandal (toeslagenaffaire) and warns that the likelihood of such a scandal occurring again is very high. The report, aptly titled ‘Xenophobic machines – Discrimination through unregulated use of algorithms in the Dutch childcare benefits scandal’, conducts a human rights analysis of a specific sub-element of the scandal: the use of algorithms and risk models. It builds on the report of the Dutch data protection authority and several other government reports.
Photo filters are keeping colorism alive
Many people use filters on social media to ‘beautify’ their pictures. In this article, Tate Ryan-Mosley discusses how these beauty filters can perpetuate colorism. Colorism has a long and complicated history, but it can be summarised as a preference for lighter skin over darker skin. Ryan-Mosley explains that “though related to racism, it’s distinct in that it can affect people regardless of their race, and can have different effects on people of the same background.” The harmful effects of colorism, ranging from discrimination to mental health issues and the use of toxic skin-lightening products, are found across races and cultures.
Racist Technology in Action: White preference in mortgage-approval algorithms
A very clear example of racist technology was exposed by Emmanuel Martinez and Lauren Kirchner in an article for The Markup. Algorithms used by a variety of American banks and lenders to automatically assess or advise on mortgages display clear racial disparity. In national data from the United States in 2019, they found that “loan applicants of color were 40%–80% more likely to be denied than their White counterparts. In certain metro areas, the disparity was greater than 250%.”
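The quoted figures are relative denial rates, which are straightforward to compute from loan-level records: each group’s denial rate divided by the denial rate for White applicants. A minimal sketch with invented numbers (the actual analysis worked on national loan-level data and held income, loan amount and other factors constant):

```python
# Minimal sketch: the relative denial rate behind figures such as
# "40%-80% more likely to be denied". All numbers below are invented.

def relative_denial_rate(group_denied, group_total,
                         baseline_denied, baseline_total):
    """A group's denial rate divided by the baseline group's rate."""
    group_rate = group_denied / group_total
    baseline_rate = baseline_denied / baseline_total
    return group_rate / baseline_rate

ratio = relative_denial_rate(group_denied=240, group_total=1_000,
                             baseline_denied=150, baseline_total=1_500)
print(f"{ratio:.1f}x")  # 2.4x, i.e. 140% more likely to be denied
```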
Government: Stop using discriminatory algorithms
In her Volkskrant opinion piece, Nani Jansen Reventlow makes a forceful argument for the government to stop using algorithms that lead to discrimination and exclusion. Reventlow, director of the Digital Freedom Fund, employs a myriad of examples to show how disregarding the social nature of technological systems can lead to reproducing existing social injustices such as racism or discrimination. The automatic fraud detection system SyRI that was ruled in violation of fundamental rights (and its dangerous successor Super SyRI) is discussed, as well as the racist proctoring software we wrote about earlier.
Covid-19 data: making racialised inequality in the Netherlands invisible
The CBS, the Dutch national statistics authority, issued a report in March showing that someone’s socioeconomic status is a clear risk factor for dying of Covid-19. In an insightful piece, researchers Linnet Taylor and Tineke Broer criticise this report and show that the way in which the CBS collects and aggregates data on Covid-19 cases and deaths obfuscates the full extent of racialised or ethnic inequality in the impact of the pandemic.
Racist Technology in Action: Proctoring software disadvantaging students of colour in the Netherlands
In an opinion piece in Parool, the Racism and Technology Center wrote about how Dutch universities use proctoring software with facial recognition technology that systematically disadvantages students of colour (see the English translation of the opinion piece). Earlier, the Center wrote about the racial bias of these systems, which has led to Black students being excluded from exams or labelled as frauds because the software did not properly recognise their faces as faces. Despite the clear proof that Proctorio disadvantages students of colour, the University of Amsterdam still used Proctorio extensively in this June’s exam weeks.