“Mowing the lawn”: The weaponisation of water and technology in Palestine

In the most recent issue of Logic(s) Magazine, Edward Ongweso Jr. writes about Israel’s strategy towards Gaza known as “mowing the lawn”: bursts of horrifying violence – a collective punishment of the Palestinian people – followed by “calmer” periods in which survivors are left to bury the dead and rebuild their infrastructure, while Israel continues to deepen its occupation.

Continue reading ““Mowing the lawn”: The weaponisation of water and technology in Palestine”

Racist Technology in Action: Meta systemically censors and silences Palestinian content globally

The censorship and silencing of Palestinian voices, and of those who support Palestine, is not new. However, since the escalation of Israel’s violence in the Gaza Strip on 7 October 2023, the scale of censorship has heightened significantly, particularly on social media platforms such as Instagram and Facebook. In December 2023, Human Rights Watch (HRW) released a 51-page report stating that Meta has engaged in systematic and global censorship of content related to Palestine since 7 October.

Continue reading “Racist Technology in Action: Meta systemically censors and silences Palestinian content globally”

Automating apartheid in the Occupied Palestinian Territories

In this interview, Matt Mahmoudi explains the Amnesty report Automated Apartheid, to which he contributed. The report exposes how the Israeli authorities extensively use surveillance tools, facial recognition technologies, and networks of CCTV cameras to support, intensify and entrench their continued domination and oppression of Palestinians in the Occupied Palestinian Territories (OPT), including Hebron and East Jerusalem. Israeli authorities use facial recognition software to consolidate existing practices of discriminatory policing and segregation, violating Palestinians’ basic rights.

Continue reading “Automating apartheid in the Occupied Palestinian Territories”

Use of machine translation tools exposes already vulnerable asylum seekers to even more risks

The use of and reliance on machine translation tools in asylum procedures has become increasingly common amongst government contractors and organisations working with refugees and migrants. This Guardian article highlights many of the issues documented by Respond Crisis Translation, a network of people who provide urgent interpretation services for migrants and refugees. The problems with machine translation tools occur throughout the asylum process, from border stations to detention centres to immigration courts.

Continue reading “Use of machine translation tools exposes already vulnerable asylum seekers to even more risks”

France wants to legalise mass surveillance for the Paris Olympics 2024: “Safety” and “security”, for whom?

Many governments are using mass surveillance to support law enforcement in the name of safety and security. In France, the Parliament (and, before it, the Senate) has approved the use of automated behavioural video surveillance at the 2024 Paris Olympics. Simply put, France wants to legalise mass surveillance at the national level, which can violate many rights, such as the freedom of assembly and association, privacy, and non-discrimination.

Continue reading “France wants to legalise mass surveillance for the Paris Olympics 2024: “Safety” and “security”, for whom?”

Racist Technology in Action: Stable Diffusion exacerbates and amplifies racial and gender disparities

Bloomberg’s researchers used Stable Diffusion to gauge the magnitude of biases in generative AI. Analysing more than 5,000 images created by Stable Diffusion, they found that it takes racial and gender disparities to extremes, producing results worse than those found in the real world.

Continue reading “Racist Technology in Action: Stable Diffusion exacerbates and amplifies racial and gender disparities”

Attempts to eliminate bias through diversifying datasets? A distraction from the root of the problem

In this eloquent and haunting piece, Hito Steyerl weaves the ongoing narratives of the eugenicist history of statistics together with its integration into machine learning. She explains why attempts to eliminate bias in facial recognition technology by diversifying datasets obscure the root of the problem: machine learning and automation are fundamentally reliant on extracting and exploiting human labour.

Continue reading “Attempts to eliminate bias through diversifying datasets? A distraction from the root of the problem”

Racist Technology in Action: Racial disparities in the scoring system used for housing allocation in L.A.

In another investigation, The Markup found significant racial disparities in the assessment system used by the Los Angeles Homeless Services Authority (LAHSA), the body responsible for coordinating homelessness services in Los Angeles. The system relies on a tool, the Vulnerability Index-Service Prioritisation Decision Assistance Tool (VI-SPDAT), to score and assess whether people qualify for subsidised permanent housing.

Continue reading “Racist Technology in Action: Racial disparities in the scoring system used for housing allocation in L.A.”

Denmark’s welfare fraud system reflects a deeply racist and exclusionary society

As part of a series of investigative reports by Lighthouse Reports and WIRED, Gabriel Geiger has revealed some of the findings about the use of welfare fraud algorithms in Denmark. This follows the trajectory of the increasing use of algorithmic systems to detect welfare fraud across European cities – or at least of those systems that are currently known.

Continue reading “Denmark’s welfare fraud system reflects a deeply racist and exclusionary society”

The cheap, racialised, Kenyan workers making ChatGPT “safe”

Stories about the hidden and exploitative racialised labour that fuels the development of technologies continue to surface, and this time it concerns ChatGPT. Billy Perrigo, who previously reported on Meta’s content moderation sweatshop and on whistleblower Daniel Motaung, who took Meta to court, has shed light on how OpenAI relied on outsourced exploitative labour in Kenya to make ChatGPT less toxic.

Continue reading “The cheap, racialised, Kenyan workers making ChatGPT “safe””

Racist Technology in Action: The “underdiagnosis bias” in AI algorithms for health: Chest radiographs

This study builds upon work on algorithmic bias and bias in healthcare. The use of AI-based diagnostic tools has been motivated by a global shortage of radiologists and by research showing that AI algorithms can match specialist performance (particularly in medical imaging). Yet the topic of AI-driven underdiagnosis has remained relatively unexplored.

Continue reading “Racist Technology in Action: The “underdiagnosis bias” in AI algorithms for health: Chest radiographs”

Profiting off Black bodies

Tiera Tanksley’s work seeks to better understand how forms of digitally mediated trauma, such as seeing images of Black people dead and dying on social media, are impacting Black girls’ mental and emotional wellness in the U.S. and Canada. Her findings confirmed her fears: Black girls report unprecedented levels of fear, depression, anxiety and chronic stress. Viewing Black people being killed by the state was deeply traumatic, with mental, emotional and physiological effects.

Continue reading “Profiting off Black bodies”

AI innovation for whom, and at whose expense?

This fantastic article by Williams, Miceli and Gebru describes how the methodological shift of AI systems to deep-learning-based models has required enormous amounts of “data” for models to learn from. Large volumes of time-consuming work, such as labelling millions of images, can now be broken down into smaller tasks and outsourced to data labourers across the globe. These data labourers are paid terribly low wages and often work in dire conditions.

Continue reading “AI innovation for whom, and at whose expense?”

Whitewashing call centre workers’ accents

Silicon Valley strikes again, with yet another techno-solutionist idea. Sanas, a speech technology startup founded by three former Stanford students, aims to alter the accents of call centre workers situated in countries such as India and the Philippines. The goal is to make them sound white and American. With the slide of a button, a call centre worker’s voice is transformed into a slightly robotic, and unmistakably white, American voice.

Continue reading “Whitewashing call centre workers’ accents”

Exploited and silenced: Meta’s Black whistleblower in Nairobi

In 2019, Daniel Motaung, a Facebook content moderator in Nairobi who was paid USD 2.20 per hour, was fired. He was working for Sama, one of Meta’s largest outsourcing partners in Africa, which brands itself as an “ethical AI” outsourcing company and is headquartered in California. Motaung led a unionisation attempt with more than 100 colleagues, fighting for better wages and working conditions.

Continue reading “Exploited and silenced: Meta’s Black whistleblower in Nairobi”

Racist Technology in Action: Turning a Black person, White

An example of racial bias in machine learning strikes again, this time in a program called PULSE, as reported by The Verge. Input a low-resolution image of Barack Obama – or of another person of colour, such as Alexandria Ocasio-Cortez or Lucy Liu – and the resulting AI-generated high-resolution image is distinctly that of a white person.

Continue reading “Racist Technology in Action: Turning a Black person, White”

Silencing Black women in tech journalism

In this op-ed, Sydette Harry unpacks how the tech sector, particularly tech journalism, has largely failed to meaningfully listen to and account for the experiences of Black women, a group that most often bears the brunt of the harmful and racist effects of technological “innovations”. While the role of tech journalism is supposedly to hold the tech industry accountable through access and insight, it has repeatedly failed to include Black people in its reporting, neither hiring Black writers nor addressing them seriously as an audience. Rather, their experiences and culture are often co-opted, silenced, unreported, and pushed out of newsrooms.

Continue reading “Silencing Black women in tech journalism”

Exploitative labour is central to the infrastructure of AI

In this piece, Julian Posada writes about a family of five in Venezuela who synchronise their routines so that there are always two people at the computer working for a crowdsourcing platform to make a living. They earn a few cents per task in a cryptocurrency and are only allowed to cash out once they’ve made at least the equivalent of USD 10. On average they earn about USD 20 per week, but their earnings can be erratic, resulting in extreme stress and precarity.

Continue reading “Exploitative labour is central to the infrastructure of AI”

Disinformation and anti-Blackness

In this issue of Logic, issue editor J. Khadijah Abdurahman and André Brock Jr., associate professor of Black Digital Studies at the Georgia Institute of Technology and author of Distributed Blackness: African American Cybercultures, converse about the history of disinformation from Reconstruction to the present, and discuss “the unholy trinity of whiteness, modernity, and capitalism”.

Continue reading “Disinformation and anti-Blackness”

Centering social injustice, de-centering tech

The Racism and Technology Center organised a panel titled Centering social injustice, de-centering tech: The case of the Dutch child benefits scandal and beyond at Privacy Camp 2022, a conference that brings together digital rights advocates, activists, academics and policymakers. Together with Merel Koning (Amnesty International), Nadia Benaissa (Bits of Freedom) and Sanne Stevens (Justice, Equity and Technology Table), the discussion used the Dutch child benefits scandal as an example to highlight issues of deeply rooted racism and discrimination in the public sector. The fixation on algorithms and automated decision-making systems tends to obscure these fundamental problems. Often, the use of technology by governments functions to normalise and rationalise existing racist and classist practices.

Continue reading “Centering social injustice, de-centering tech”

Racist Technology in Action: U.S. universities using race in their risk algorithms as a predictor for student success

An investigation by The Markup in March 2021 revealed that some universities in the U.S. are using software with a risk algorithm that takes a student’s race as one of the factors to predict and evaluate how successful that student may be. Several universities have described race as a “high impact predictor”. The investigation found large disparities in how the software treated students of different races, with Black students deemed high risk at four times the rate of their White peers.

Continue reading “Racist Technology in Action: U.S. universities using race in their risk algorithms as a predictor for student success”

Predictive policing reinforces and accelerates racial bias

In a recent investigative piece, The Markup and Gizmodo analysed 5.9 million crime predictions by PredPol, crime prediction software used by law enforcement agencies in the U.S. The results confirm the racist logic and impact of predictive policing on individuals and neighbourhoods. Compared to Whiter, middle- and upper-income neighbourhoods, Black, Latino and poor neighbourhoods were relentlessly targeted by the software, which recommended increased police presence. The fewer White residents who lived in an area – and the more Black and Latino residents who lived there – the more likely PredPol was to predict a crime there. Some neighbourhoods in the dataset were the subject of more than 11,000 predictions.

Continue reading “Predictive policing reinforces and accelerates racial bias”

Digital Rights for All: harmed communities should be front and centre

Earlier this month, the Digital Freedom Fund kicked off a series of online workshops as part of its ‘Digital Rights for All’ programme. In this post, Laurence Meyer details the reasons for the initiative, whose fundamental aim is to address why the individuals and communities most affected by the harms of technologies are not centred in advocacy, policy, and strategic litigation work on digital rights in Europe, and how to tackle challenges around funding, sustainable collaboration and language barriers.

Continue reading “Digital Rights for All: harmed communities should be front and centre”

Racist Technology in Action: Facebook labels black men as ‘primates’

Amid the reckoning of the Black Lives Matter movement in the summer of 2020, the Daily Mail, a British tabloid, posted a video featuring Black men in altercations with police and white civilians. In The New York Times, Ryan Mac reports how Facebook users who watched the video saw an automated prompt asking whether they would like to “keep seeing videos about Primates”, despite the video having no connection to primates or monkeys.

Continue reading “Racist Technology in Action: Facebook labels black men as ‘primates’”

Big Tech is propped up by a globally exploited workforce

Behind the promise of automation and the advances in machine learning and AI often paraded by tech companies like Amazon, Google, Facebook and Tesla lies a deeply exploitative industry of cheap human labour. In an excerpt published on Rest of World from his forthcoming book, “Work Without the Worker: Labour in the Age of Platform Capitalism,” Phil Jones illustrates how the hidden labour of automation is outsourced to marginalised, racialised and disenfranchised populations within the Global North, as well as in the Global South.

Continue reading “Big Tech is propped up by a globally exploited workforce”

Uber-racist: Racial discrimination in dynamic pricing algorithms

Racial discrimination in dynamic pricing algorithms is neither surprising nor new. VentureBeat writes about another recent study that supports these findings, in the context of the dynamic pricing algorithms used by ride-hailing apps such as Uber and Lyft. Poorer neighbourhoods with larger non-white populations were significantly associated with higher fare prices. A similar issue was discovered in Airbnb’s ‘Smart Pricing’ feature, which aims to help hosts secure more bookings but turned out to be detrimental to Black hosts, leading to greater social inequality (even if unintentionally).

Continue reading “Uber-racist: Racial discrimination in dynamic pricing algorithms”

Human-in-the-loop is not the magic bullet to fix AI harms

In many discussions and policy proposals on addressing the harms of AI and algorithmic decision-making, much attention and hope has been placed on human oversight as a solution. This article by Ben Green and Amba Kak urges us to question the limits of human oversight rather than seeing it as a magic bullet. For example, calling for ‘meaningful’ oversight sounds better in theory than in practice: humans are prone to automation bias, struggle to evaluate and make decisions based on an algorithm’s results, and can exhibit racial biases in response to algorithms. Consequently, human oversight can itself produce racist outcomes, as has already been shown in areas such as policing and housing.

Continue reading “Human-in-the-loop is not the magic bullet to fix AI harms”
