“It’s not a representation of me”: Examining Accent Bias and Digital Exclusion in Synthetic AI Voice Services

Recent advances in artificial intelligence (AI) speech generation and voice cloning technologies have produced naturalistic speech and accurate voice replication, yet their influence on sociotechnical systems across diverse accents and linguistic traits is not fully understood. This study evaluates two synthetic AI voice services (Speechify and ElevenLabs) through a mixed methods approach using surveys and interviews to assess technical performance and uncover how users’ lived experiences influence their perceptions of accent variations in these speech technologies. Our findings reveal technical performance disparities across five regional, English-language accents and demonstrate how current speech generation technologies may inadvertently reinforce linguistic privilege and accent-based discrimination, potentially creating new forms of digital exclusion. Overall, our study highlights the need for inclusive design and regulation by providing actionable insights for developers, policymakers, and organizations to ensure equitable and socially responsible AI speech technologies.

By Avijit Ghosh, Christo Wilson, Jeffrey Gleason, Sarah Elizabeth Gillespie, Shira Michel, and Sufi Kaur for ACM Digital Library on June 23, 2025

TIDEs: A Transgender and Nonbinary Community-Labeled Dataset and Model for Transphobia Identification in Digital Environments

Transphobic rhetoric is a prevalent problem on social media that existing platform policies fail to meaningfully address. As such, trans people often create or adopt technologies independent from (but deployed within) platforms that help them mitigate the effects of facing transphobia online. In this paper, we introduce TIDEs (Transphobia Identification in Digital Environments), a dataset and model for detecting transphobic speech to contribute to the growing space of trans technologies for content moderation. We outline care-centered data practices, a methodology for constructing and labeling datasets for hate speech classification, which we developed while working closely with trans and nonbinary data annotators. Our fine-tuned DeBERTa model succeeds at detecting several ideologically distinct types of transphobia, achieving an F1 score of 0.81. As a publicly available dataset and model, TIDEs can serve as the base for future trans technologies and research that confronts and addresses the problem of online transphobia. Our results suggest that downstream applications of TIDEs may be deployable for reducing online harm for trans people.

By Dallas Card, Eric Gilbert, Francesca Lameiro, Lavinia Dunagan, and Oliver Haimson for ACM Digital Library on June 23, 2025

Dangerous Criminals and Beautiful Prostitutes? Investigating Harmful Representations in Dutch Language Models

While language-based AI is becoming increasingly popular, ensuring that these systems are socially responsible is essential. Despite their growing impact, large language models (LLMs), the engines of many language-driven applications, remain largely black boxes. Concerns about LLMs reinforcing harmful representations are shared by academia, industry, and the public. In professional contexts, researchers rely on LLMs for computational tasks such as text classification and contextual prediction, during which the risk of perpetuating biases cannot be overlooked. In a broader society where LLM-powered tools are widely accessible, interacting with biased models can shape public perceptions and behaviors, potentially reinforcing problematic social issues over time. This study investigates harmful representations in LLMs, focusing on ethnicity and gender in the Dutch context. Through template-based sentence construction and model probing, we identified potentially harmful representations using both automated and manual content analysis at the lexical and sentence levels, combining quantitative measurements with qualitative insights. Our findings have important ethical, legal, and political implications, challenging the acceptability of such harmful representations and emphasizing the need for effective mitigation strategies.

By Claes de Vreese, Gabriela Trogrlic, Natali Helberger, and Zilin Lin for ACM Digital Library on June 23, 2025

New York City uses a secret Child Welfare Algorithm

New York City’s Administration for Children’s Services (ACS) has been secretly using an AI risk assessment system since 2018 to flag families for additional investigation. A Markup investigation reveals how the algorithm mainly affects families of colour, raising serious questions about algorithmic bias against racialised and poor families in child welfare.


Racist Technology in Action: How the municipality of Amsterdam tried to roll out a ‘fair’ fraud detection algorithm. Spoiler alert: it was a disaster

Amsterdam officials’ technosolutionist way of thinking struck once again: they believed their “Smart Check” AI system could prevent fraud while still protecting citizens’ rights.


Navigeren in niemandsland

The Wetenschappelijke Adviesraad Politie (WARP), the Dutch police’s scientific advisory council, advises the national chief of police on seven urgent challenges around digitalisation and AI in police work.

From Wetenschappelijke Adviesraad Politie on June 5, 2025

Racist Technology in Action: Grok AI is obsessively focused on the extreme right trope of “white genocide” in South Africa

For a day or so, Musk’s Grok AI chatbot inserted its belief in a “white genocide” in South Africa, by now a classic white supremacist fabrication, into nearly every answer it gave, regardless of the question asked.


‘Ethical’ AI in healthcare has a racism problem, and it needs to be fixed ASAP

We all know that racist algorithms can harm people across many sectors, and healthcare is no exception. In a powerful commentary published by Cell Press, Ferryman et al. argue that racism must be treated as a core ethical issue in healthcare AI, not merely a flaw to be patched after deployment.


Can Humanity Survive AI?

With the development of artificial intelligence racing forward at warp speed, some of the richest men in the world may be deciding the fate of humanity right now.

By Garrison Lovely for Jacobin on January 22, 2025

Racist Technology in Action: Grok’s total lack of safeguards against generating racist content

Grok is the chatbot made by xAI, a startup founded by Elon Musk, and the generative AI system powering X (née Twitter). It has recently gained the ability to generate photorealistic images, including of celebrities. This is a problem because its ‘guardrails’ are lacking: it willingly generates racist and other deeply problematic images.


Why ‘open’ AI systems are actually closed, and why this matters

This paper examines ‘open’ artificial intelligence (AI). Claims about ‘open’ AI often lack precision, frequently eliding scrutiny of substantial industry concentration in large-scale AI development and deployment, and often incorrectly applying understandings of ‘open’ imported from free and open-source software to AI systems. At present, powerful actors are seeking to shape policy using claims that ‘open’ AI is either beneficial to innovation and democracy, on the one hand, or detrimental to safety, on the other. When policy is being shaped, definitions matter. To add clarity to this debate, we examine the basis for claims of openness in AI, and offer a material analysis of what AI is and what ‘openness’ in AI can and cannot provide: examining models, data, labour, frameworks, and computational power. We highlight three main affordances of ‘open’ AI, namely transparency, reusability, and extensibility, and we observe that maximally ‘open’ AI allows some forms of oversight and experimentation on top of existing models. However, we find that openness alone does not perturb the concentration of power in AI. Just as many traditional open-source software projects were co-opted in various ways by large technology companies, we show how rhetoric around ‘open’ AI is frequently wielded in ways that exacerbate rather than reduce concentration of power in the AI sector.

By David Gray Widder, Meredith Whittaker, and Sarah Myers West for Nature on November 27, 2024

Sweden’s Suspicion Machine

Behind a veil of secrecy, Sweden’s social security agency deploys discriminatory algorithms in search of a fraud epidemic it has invented.

By Ahmed Abdigadir, Anna Tiberg, Daniel Howden, Eva Constantaras, Frederick Laurin, Gabriel Geiger, Henrik Malmsten, Iben Ljungmark, Justin-Casimir Braun, Sascha Granberg, and Thomas Molén for Lighthouse Reports on November 27, 2024

The New Artificial Intelligentsia

In the fifth essay of the Legacies of Eugenics series, Ruha Benjamin explores how AI evangelists wrap their self-interest in a cloak of humanistic concern.

By Ruha Benjamin for Los Angeles Review of Books on October 18, 2024

Beyond Surveillance – The Case Against AI Detection and AI Proctoring

Are you an educator seeking a supportive space to critically examine AI surveillance tools? This workshop is for you. In an era where AI increasingly pervades education, AI detection and proctoring have sparked significant controversy. These tools, categorized as academic surveillance software, algorithmically monitor behaviour and movements. Students are increasingly forced to face them. Together, we will move beyond surveillance toward a culture of trust and transparency, shining a light on the black box of surveillance and discussing our findings. In this two-hour workshop, we will explore AI detection and proctoring through a 40-minute presentation, an hour of activities and discussion, and 20 minutes of group tool evaluation using a rubric.

By Ian Linkletter for BCcampus on September 18, 2024
