Silencing Black women in tech journalism

In this op-ed, Sydette Harry unpacks how the tech sector, and tech journalism in particular, has largely failed to listen to and account for the experiences of Black women, the group that most often bears the brunt of the harmful and racist effects of technological “innovations”. While the role of tech journalism is supposedly to hold the tech industry accountable through access and insight, it has repeatedly failed to include Black people in its reporting, neither hiring Black writers nor taking Black readers seriously as an audience. Instead, their experiences and culture are co-opted, silenced, left unreported, and pushed out of newsrooms.

Exploitative labour is central to the infrastructure of AI

In this piece, Julian Posada writes about a family of five in Venezuela who synchronise their routines so that two of them are always at the computer, working for a crowdsourcing platform to make a living. They earn a few cents per task, paid in cryptocurrency, and are only allowed to cash out once they have made at least the equivalent of USD 10. On average they earn about USD 20 per week, but their earnings can be erratic, resulting in extreme stress and precarity.

Disinformation and anti-Blackness

In this issue of Logic, issue editor J. Khadijah Abdurahman and André Brock Jr., associate professor of Black Digital Studies at the Georgia Institute of Technology and author of Distributed Blackness: African American Cybercultures, converse about the history of disinformation from Reconstruction to the present, and discuss “the unholy trinity of whiteness, modernity, and capitalism”.

Centering social injustice, de-centering tech

The Racism and Technology Center organised a panel titled “Centering social injustice, de-centering tech: The case of the Dutch child benefits scandal and beyond” at Privacy Camp 2022, a conference that brings together digital rights advocates, activists, academics and policymakers. Together with Merel Koning (Amnesty International), Nadia Benaissa (Bits of Freedom) and Sanne Stevens (Justice, Equity and Technology Table), the panel used the Dutch child benefits scandal to highlight deeply rooted racism and discrimination in the public sector. The fixation on algorithms and automated decision-making systems tends to obscure these fundamental problems: often, governments’ use of technology serves to normalise and rationalise existing racist and classist practices.

Racist Technology in Action: U.S. universities using race in their risk algorithms as a predictor for student success

An investigation by The Markup in March 2021 revealed that some universities in the U.S. are using software with a risk algorithm that treats a student’s race as one of the factors in predicting how successful that student is likely to be. Several universities have described race as a “high impact predictor”. The investigation found large disparities in how the software treated students of different races, with Black students deemed high risk at four times the rate of their White peers.
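
To make concrete how a single demographic input can come to dominate a prediction, here is a minimal, purely hypothetical sketch of such a risk model in Python – the feature names and weights are invented for illustration and are not the vendor’s actual model:

import math

# Hypothetical weights for a logistic "student risk" model. A large
# weight on a race indicator is what makes race a "high impact
# predictor": two otherwise identical students get different scores.
WEIGHTS = {
    "intercept": -1.0,
    "gpa": -0.8,            # a higher GPA lowers the predicted risk
    "credits_behind": 0.5,  # falling behind on credits raises it
    "is_black": 1.2,        # race used directly as an input feature
}

def risk_score(features):
    """Logistic regression: sigmoid of the weighted feature sum."""
    z = WEIGHTS["intercept"] + sum(
        WEIGHTS[name] * value for name, value in features.items()
    )
    return 1 / (1 + math.exp(-z))

student = {"gpa": 3.0, "credits_behind": 1}
print(risk_score(dict(student, is_black=0)))  # ~0.05
print(risk_score(dict(student, is_black=1)))  # ~0.15, same academic record

Even in this toy model, the race term alone triples the predicted risk for an otherwise identical academic record – the kind of disparity The Markup’s investigation surfaced.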

Predictive policing reinforces and accelerates racial bias

In a recent investigative piece, The Markup and Gizmodo analysed 5.9 million crime predictions made by PredPol, crime prediction software used by law enforcement agencies in the U.S. The results confirm the racist logic of predictive policing and its impact on individuals and neighbourhoods. Compared to Whiter, middle- and upper-income neighbourhoods, Black, Latino and poor neighbourhoods were relentlessly targeted by the software, which recommended increased police presence. The fewer White residents who lived in an area – and the more Black and Latino residents who lived there – the more likely PredPol was to predict a crime there. In the dataset, some neighbourhoods were the subject of more than 11,000 predictions.

Digital Rights for All: harmed communities should be front and centre

Earlier this month, the Digital Freedom Fund kicked off a series of online workshops as part of its ‘Digital Rights for All’ programme. In this post, Laurence Meyer details the reasons for the initiative, whose fundamental aim is to address why the individuals and communities most affected by the harms of technologies are not centred in advocacy, policy, and strategic litigation work on digital rights in Europe, and how to tackle challenges around funding, sustainable collaboration and language barriers.

Racist Technology in Action: Facebook labels Black men as ‘primates’

In the summer of 2020, amid the reckoning of the Black Lives Matter movement, the Daily Mail, a British tabloid, posted a video featuring Black men in altercations with police officers and white civilians. In the New York Times, Ryan Mac reports how Facebook users who watched that video saw an automated prompt asking whether they would like to “keep seeing videos about Primates”, despite the video having no connection to primates or monkeys.

Big Tech is propped up by a globally exploited workforce

Behind the promise of automation and the advances in machine learning and AI often paraded by tech companies like Amazon, Google, Facebook and Tesla lies a deeply exploitative industry of cheap human labour. In an excerpt published on Rest of World from his forthcoming book, “Work Without the Worker: Labour in the Age of Platform Capitalism”, Phil Jones illustrates how the hidden labour of automation is outsourced to marginalised, racialised and disenfranchised populations in the Global North as well as the Global South.

Uber-racist: Racial discrimination in dynamic pricing algorithms

Racial discrimination in dynamic pricing algorithms is neither surprising nor new. VentureBeat writes about another recent study that supports these findings, this time in the context of the dynamic pricing algorithms used by ride-hailing apps such as Uber and Lyft. Poorer neighbourhoods with larger non-white populations were significantly associated with higher fare prices. A similar issue was discovered in Airbnb’s ‘Smart Pricing’ feature, which aims to help hosts secure more bookings but turned out to be detrimental to Black hosts, leading to greater social inequality (even if unintentionally).

Human-in-the-loop is not the magic bullet to fix AI harms

In many discussions and policy proposals on addressing and fixing the harms of AI and algorithmic decision-making, much attention and hope have been placed on human oversight as a solution. This article by Ben Green and Amba Kak urges us to question the limits of human oversight rather than seeing it as a magic bullet: calls for ‘meaningful’ oversight, for example, sound better in theory than they work in practice. Humans are prone to automation bias, struggle to evaluate and make decisions based on an algorithm’s results, and can exhibit racial biases in response to algorithms. These effects can, in turn, produce racist outcomes, as has already been documented in areas such as policing and housing.

AI and its hidden costs

In a recent interview with The Guardian, Kate Crawford discusses her new book, Atlas of AI, which delves into the broader landscape of how AI systems work by canvassing the structures of production and material realities behind them. One example is ImageNet, a massive training dataset created by researchers from Stanford that is used to test whether object recognition algorithms work. It was made by scraping photos and images from across the web and hiring crowd workers to label them according to an outdated lexical database created in the 1980s.

Racist and classist predictive policing exists in Europe too

The enduring idea that technology will be able to solve many of society’s existing problems continues to permeate governments. For EUobserver, Fieke Jansen and Sarah Chander illustrate some of the problematic and harmful uses of ‘predictive’ algorithmic systems by states and public authorities across the UK and Europe.

Online hate and harassment continue to proliferate

A recent report by the ADL, an anti-hate organisation in the US, shows that social media platforms have consistently failed to prevent online hate and harassment. Despite social media companies’ self-regulatory efforts, results from the ADL’s annual survey show that the level of online hate and harassment has barely shifted in the past three years. These online experiences disproportionately harm marginalised groups, with LGBTQI+, Asian-American, Jewish and African-American respondents reporting higher rates of various forms of harassment. Many of these problems are intrinsic to business models that optimise social media platforms for maximum engagement, further exacerbating existing issues in society.

Online proctoring excludes and discriminates

The use of software to automatically detect cheating on online exams – online proctoring – has been the go-to solution for many schools and universities in response to the COVID-19 pandemic. In this article, Shea Swauger addresses some of the discriminatory, privacy and security harms that can impact students across class, gender, race, and disability lines. Swauger critiques how these technologies encode “normal” bodies – cisgender, white, able-bodied, neurotypical, male – as the standard, and how students who do not (or cannot) conform are punished by them.

Filtering out the “Asians”

The article’s title speaks for itself: “Your iPhone’s Adult Content Filter Blocks Anything ‘Asian’”. Victoria Song tested the claims made by The Independent: if you enable the “Limit Adult Websites” function in your iPhone’s Screen Time settings, you are blocked from seeing any Google search results for “Asian”. Related searches, such as “Asian recipes” or “Southeast Asian”, are also blocked by the adult content filter. There is no clarity or transparency about how search terms come to be classified as adult content, or whether the process is automated or manual. Regardless of intention, the outcome and the lack of action by Google and Apple are unsurprising but disconcerting: this is not so much a mistake as a feature of their commercial practices and their disregard for the social harms of their business model.

Racist technology in action: Gun, or electronic device?

The answer to that question depends on your skin colour, apparently. Nicholas Kayser-Bril, a reporter at AlgorithmWatch, conducted an experiment that went viral on Twitter: Google Vision Cloud, a service for automated image labelling based on the subset of AI known as “computer vision”, labelled an image of a dark-skinned individual holding a thermometer with the word “gun”, whilst an image of a lighter-skinned individual holding one was labelled “electronic device”.

Corporatespeak and racial injustice

In light of the Black Lives Matter protests in the U.S. and the protests against police brutality in Europe, technology companies were quick to release corporate statements, commitments, campaigns and initiatives to tackle discrimination and racial injustice. Amber Hamilton evaluated 63 public-facing documents from major technology companies such as Facebook, Instagram, Twitter, YouTube, Airbnb and TikTok.
