Tech companies’ complicity in the ongoing genocide in Gaza and Palestine
As I write this piece, an Israeli airstrike has hit makeshift tents near Al-Aqsa Martyrs Hospital in Deir al-Balah, burning tents and people alive. The Israeli military bombed an aid distribution point in Jabalia, wounding 50 people who were waiting for flour. The entire north of Gaza has been besieged by the Israeli Occupying Forces for the past 10 days, trapping 400,000 Palestinians without food, water, or medical supplies. Every day since last October, Israel, with the help of its western allies, has intensified its assault on Palestine, each time pushing the boundaries of what is comprehensible. There are no moral or legal boundaries that Israel and its allies will not cross. The systematic ethnic cleansing of Palestine, which has been the basis of the settler-colonial Zionist project since its inception, has accelerated since 7 October 2023. From Palestine to Lebanon, Syria and Yemen, Israel and its allies continue their violence with impunity. Meanwhile, mainstream western news media are either silent in their reporting or complicit in abetting the ongoing destruction of the Palestinian people and the resistance.
Tech workers demand Google and Amazon to stop their complicity in Israel’s genocide against the Palestinian people
Since 2021, thousands of Amazon and Google tech workers have been organising against Project Nimbus, Google and Amazon’s shared USD 1.2 billion contract with the Israeli government and military. In all that time, there has been no response from management or executives. Their organising efforts have accelerated since 7 October 2023, with the ongoing genocide in Gaza and the occupied Palestinian territories by the Israeli state.
“Mowing the lawn”: The weaponisation of water and technology in Palestine
In the most recent issue of Logic(s) Magazine, Edward Ongweso Jr. writes about Israel’s strategy towards Gaza called “mowing the lawn”: bursts of horrifying violence – a collective punishment of Palestinian people – followed by “calmer” periods in which survivors are left to bury the dead and rebuild their infrastructure, while Israel continues to deepen its occupation.
Racist Technology in Action: Meta systemically censors and silences Palestinian content globally
The censorship and silencing of Palestinian voices, and of those who support Palestine, is not new. However, since the escalation of Israel’s violence in the Gaza Strip on 7 October 2023, the scale of censorship has heightened significantly, particularly on social media platforms such as Instagram and Facebook. In December 2023, Human Rights Watch (HRW) released a 51-page report stating that Meta has engaged in systematic and global censorship of content related to Palestine since 7 October.
Automating apartheid in the Occupied Palestinian Territories
In this interview, Matt Mahmoudi explains the Amnesty report titled Automating Apartheid, which he contributed to. The report exposes how the Israeli authorities extensively use surveillance tools, facial recognition technologies, and networks of CCTV cameras to support, intensify and entrench their continued domination and oppression of Palestinians in the Occupied Palestinian Territories (OPT), including Hebron and East Jerusalem. Facial recognition software is used by Israeli authorities to consolidate existing practices of discriminatory policing and segregation, violating Palestinians’ basic rights.
Use of machine translation tools exposes already vulnerable asylum seekers to even more risks
The use of and reliance on machine translation tools in asylum procedures has become increasingly common amongst government contractors and organisations working with refugees and migrants. This Guardian article highlights many of the issues documented by Respond Crisis Translation, a network of people who provide urgent interpretation services for migrants and refugees. The problems with machine translation tools occur throughout the asylum process, from border stations to detention centres to immigration courts.
Racist Technology in Action: Flagged as risky simply for requesting social assistance in Veenendaal, The Netherlands
This collaborative investigative effort by Spotlight Bureau, Lighthouse Reports and Follow the Money dives into the story of a Moroccan-Dutch family in Veenendaal that was flagged for fraud by the Dutch government.
Filipino workers in “digital sweatshops” train AI models for the West
In the Philippines, more than two million people perform crowdwork, such as data annotation, according to informal government estimates – making it one of the countries with the largest crowdworking workforces.
Black artists show how generative AI ignores, distorts, erases and censors their histories and cultures
Black artists have been tinkering with machine learning algorithms in their artistic projects, surfacing many questions about the troubling relationship between AI and race, as reported in the New York Times.
France wants to legalise mass surveillance for the Paris Olympics 2024: “Safety” and “security”, for whom?
Many governments use mass surveillance to support law enforcement in the name of safety and security. In France, the French Parliament (and, before it, the French Senate) has approved the use of automated behavioural video surveillance at the 2024 Paris Olympics. Simply put, France wants to legalise mass surveillance at the national level, which can violate many rights, such as the freedoms of assembly and association, privacy, and non-discrimination.
Racist Technology in Action: Stable Diffusion exacerbates and amplifies racial and gender disparities
Bloomberg’s researchers used Stable Diffusion to gauge the magnitude of biases in generative AI. Through an analysis of more than 5,000 images created by Stable Diffusion, they found that it takes racial and gender disparities to extremes – the results are worse than those found in the real world.
Attempts to eliminate bias through diversifying datasets? A distraction from the root of the problem
In this eloquent and haunting piece, Hito Steyerl weaves together the eugenicist history of statistics and its ongoing integration into machine learning. She elaborates on why attempts to eliminate bias in facial recognition technology by diversifying datasets obscure the root of the problem: machine learning and automation are fundamentally reliant on extracting and exploiting human labour.
AI hype: Unbearably white and male
In this (rightly) scathing article, Andrea Grimes denounces the “AI” hype of tech bros, perpetuated by much of the mainstream media coverage of large language models, ChatGPT, and GPT-4.
More data will not solve bias in algorithmic systems: it’s a systemic issue, not a ‘glitch’
In an interview with Zoë Corbyn in the Guardian, data journalist and Associate Professor of Journalism Meredith Broussard discusses her new book, More Than a Glitch: Confronting Race, Gender and Ability Bias in Tech.
Racist Technology in Action: Racial disparities in the scoring system used for housing allocation in L.A.
In another investigation by The Markup, significant racial disparities were found in the assessment system used by the Los Angeles Homeless Services Authority (LAHSA), the body responsible for coordinating homelessness services in Los Angeles. This assessment system relies on a tool called the Vulnerability Index–Service Prioritisation Decision Assistance Tool (VI-SPDAT) to score and assess whether people qualify for subsidised permanent housing.
Denmark’s welfare fraud system reflects a deeply racist and exclusionary society
As part of a series of investigative reporting by Lighthouse Reports and WIRED, Gabriel Geiger has revealed some of the findings about the use of welfare fraud algorithms in Denmark. This reporting follows the increasing use of algorithmic systems to detect welfare fraud across European cities – or at least of those systems that are currently known.
The cheap, racialised, Kenyan workers making ChatGPT “safe”
Stories about the hidden and exploitative racialised labour which fuels the development of technologies continue to surface, and this time it is about ChatGPT. Billy Perrigo, who previously reported on Meta’s content moderation sweatshop and on whistleblower Daniel Motaung, who took Meta to court, has shed light on how OpenAI has relied upon outsourced exploitative labour in Kenya to make ChatGPT less toxic.
Racist Technology in Action: The “underdiagnosis bias” in AI algorithms for health: Chest radiographs
This study builds upon work on algorithmic bias and bias in healthcare. The use of AI-based diagnostic tools has been motivated by a global shortage of radiologists, and by research showing that AI algorithms can match specialist performance (particularly in medical imaging). Yet, the topic of AI-driven underdiagnosis has been relatively unexplored.
Profiting off Black bodies
Tiera Tanksley’s work seeks to better understand how forms of digitally mediated traumas, such as seeing images of Black people dead and dying on social media, are impacting Black girls’ mental and emotional wellness in the U.S. and Canada. Her fears were confirmed in her findings: Black girls report unprecedented levels of fear, depression, anxiety and chronic stress. Viewing Black people being killed by the state was deeply traumatic, with mental, emotional and physiological effects.
Amsterdam’s Top400 project stigmatises and over-criminalises youths
A critical, in-depth report on Top400 – a crime prevention project by the Amsterdam municipality that targets and polices minors (between the ages of 12 and 23) – has emphasised the stigmatising, discriminatory, and invasive effects of the Top400 on youths and their families.
AI innovation for whom, and at whose expense?
This fantastic article by Williams, Miceli and Gebru describes how the methodological shift of AI systems to deep-learning-based models has required enormous amounts of “data” for models to learn from. Large volumes of time-consuming work, such as labelling millions of images, can now be broken down into smaller tasks and outsourced to data labourers across the globe. These data labourers are paid terribly low wages and often work in dire conditions.
Racist Technology in Action: Robot rapper spouts racial slurs and the N-word
In this bizarre yet unsurprising example of an AI-gone-wrong (or “rogue”), an artificially designed robot rapper, FN Meka, has been dropped by Capitol Music Group for racial slurs and the use of the N-word in his music.
Whitewashing call centre workers’ accents
Silicon Valley strikes again, with yet another techno-solutionist idea. Sanas, a speech technology startup founded by three former Stanford students, aims to alter the accents of call centre workers situated in countries such as India and the Philippines. The goal is to make them sound white and American. With the slide of a button, a call centre worker’s voice will be transformed into a slightly robotic, and unmistakeably white, American voice.
Exploited and silenced: Meta’s Black whistleblower in Nairobi
In 2019, a Facebook content moderator in Nairobi, Daniel Motaung, who was paid USD 2.20 per hour, was fired. He was working for one of Meta’s largest outsourcing partners in Africa, Sama, which brands itself as an “ethical AI” outsourcing company, and is headquartered in California. Motaung led a unionisation attempt with more than 100 colleagues, fighting for better wages and working conditions.
Racist Technology in Action: Turning a Black person, White
An example of racial bias in machine learning strikes again, this time in a program called PULSE, as reported by The Verge. Input a low-resolution image of Barack Obama – or of another person of colour, such as Alexandria Ocasio-Cortez or Lucy Liu – and the resulting AI-generated high-resolution image is distinctly of a white person.
Centring communities in the fight against injustice
In this interview with OneWorld, Nani Jansen Reventlow reflects on the harmful uses of technology perpetuated by private and public actors, ranging from the Dutch child benefits scandal to the use of proctoring in education and ‘super SyRI’ in public services.
Silencing Black women in tech journalism
In this op-ed, Sydette Harry unpacks how the tech sector, particularly tech journalism, has largely failed to meaningfully listen to and account for the experiences of Black women, a group that most often bears the brunt of the harmful and racist effects of technological “innovations”. While the role of tech journalism is supposedly to hold the tech industry accountable through access and insight, it has repeatedly failed to include Black people in its reporting, neither by hiring Black writers nor by addressing them seriously as an audience. Rather, their experiences and culture are often co-opted, silenced, unreported, and pushed out of newsrooms.
Exploitative labour is central to the infrastructure of AI
In this piece, Julian Posada writes about a family of five in Venezuela, who synchronise their routines so that there will always be two people at the computer working for a crowdsourcing platform to make a living. They earn a few cents per task in a cryptocurrency and are only allowed to cash out once they’ve made at least the equivalent of USD 10. On average they earn about USD 20 per week, but their earnings can be erratic, resulting in extreme stress and precarity.
Racist Technology in Action: Chest X-ray classifiers exhibit racial, gender and socio-economic bias
The development and use of AI and machine learning in healthcare is proliferating. A 2020 study has shown that chest X-ray datasets that are used to train diagnostic models are biased against certain racial, gender and socioeconomic groups.
Disinformation and anti-Blackness
In this issue of Logic, issue editor J. Khadijah Abdurahman and André Brock Jr., associate professor of Black Digital Studies at the Georgia Institute of Technology and author of Distributed Blackness: African American Cybercultures, converse about the history of disinformation from Reconstruction to the present, and discuss “the unholy trinity of whiteness, modernity, and capitalism”.
Centering social injustice, de-centering tech
The Racism and Technology Center organised a panel titled Centering social injustice, de-centering tech: The case of the Dutch child benefits scandal and beyond at Privacy Camp 2022, a conference that brings together digital rights advocates, activists, academics and policymakers. Together with Merel Koning (Amnesty International), Nadia Benaissa (Bits of Freedom) and Sanne Stevens (Justice, Equity and Technology Table), the discussion used the Dutch child benefits scandal as an example to highlight issues of deeply rooted racism and discrimination in the public sector. The fixation on algorithms and automated decision-making systems tends to obscure these fundamental problems. Often, the use of technology by governments functions to normalise and rationalise existing racist and classist practices.
Predictive policing constrains our possibilities for better futures
In the context of the use of crime predictive software in policing, Chris Gilliard reiterated in WIRED how data-driven policing systems and programs are fundamentally premised on the assumption that historical data about crimes determines the future.
Racist Technology in Action: U.S. universities using race in their risk algorithms as a predictor for student success
An investigation by The Markup in March 2021 revealed that some universities in the U.S. are using software with a risk algorithm that treats a student’s race as one of the factors in predicting and evaluating how successful that student may be. Several universities have described race as a “high impact predictor”. The investigation found large disparities in how the software treated students of different races, with Black students deemed four times as high a risk as their White peers.
Predictive policing reinforces and accelerates racial bias
The Markup and Gizmodo, in a recent investigative piece, analysed 5.9 million crime predictions by PredPol, crime prediction software used by law enforcement agencies in the U.S. The results confirm the racist logics and impact of predictive policing on individuals and neighbourhoods. Compared to Whiter, middle- and upper-income neighbourhoods, Black, Latino and poor neighbourhoods were relentlessly targeted by the software, which recommended increased police presence. The fewer White residents who lived in an area – and the more Black and Latino residents who lived there – the more likely PredPol was to predict a crime there. Some neighbourhoods in their dataset were the subject of more than 11,000 predictions.
Intentional or otherwise, surveillance systems serve existing power structures
In Wired, Chris Gilliard strings together an incisive account of the racist history of surveillance: from the invention of the home security system to modern-day surveillance devices and technologies, such as Amazon’s and Google’s suites of security products.
Digital Rights for All: harmed communities should be front and centre
Earlier this month, Digital Freedom Fund kicked off a series of online workshops of the ‘Digital Rights for All’ programme. In this post, Laurence Meyer details the reasons for this initiative, whose fundamental aim is to address why the individuals and communities most affected by the harms of technologies are not centred in advocacy, policy, and strategic litigation work on digital rights in Europe, and how to tackle challenges around funding, sustainable collaborations and language barriers.
Racist Technology in Action: Facebook labels black men as ‘primates’
In the reckoning of the Black Lives Matter movement in the summer of 2020, the Daily Mail, a British tabloid, posted a video featuring Black men in altercations with police and white civilians. In the New York Times, Ryan Mac reports how Facebook users who watched that video saw an automated prompt asking if they would like to “keep seeing videos about Primates”, despite the video having no relation to primates or monkeys.
Big Tech is propped up by a globally exploited workforce
Behind the promise of automation and advances in machine learning and AI, often paraded by tech companies like Amazon, Google, Facebook and Tesla, lies a deeply exploitative industry of cheap human labour. In an excerpt published on Rest of World from his forthcoming book, Work Without the Worker: Labour in the Age of Platform Capitalism, Phil Jones illustrates how the hidden labour of automation is outsourced to marginalised, racialised and disenfranchised populations within the Global North, as well as in the Global South.
Uber-racist: Racial discrimination in dynamic pricing algorithms
Racial discrimination in dynamic pricing algorithms is neither surprising nor new. VentureBeat writes about another recent study that supports these findings, in the context of dynamic pricing algorithms used by ride-hailing companies such as Uber, Lyft and other apps. Neighbourhoods that were poorer and had larger non-white populations were significantly associated with higher fare prices. A similar issue was discovered in Airbnb’s ‘Smart Pricing’ feature, which aims to help hosts secure more bookings. It turned out to be detrimental to Black hosts, leading to greater social inequality (even if unintentionally).
The use of racist technology is not inevitable, but a choice we make
Last month, we wrote a piece in Lilith Mag that builds on some of the examples we have previously highlighted – the Dutch childcare benefits scandal, the use of online proctoring software, and popular dating app Grindr – to underscore two central ideas.