Trial of technology comes as official report warns existing system has been failing for at least a decade.
By Kiran Stacey for The Guardian on July 22, 2025
Internal Google documents show that the tech giant feared it wouldn’t be able to monitor how Israel might use its technology to harm Palestinians.
By Sam Biddle for The Intercept on May 12, 2025
Recent advances in artificial intelligence (AI) speech generation and voice cloning technologies have produced naturalistic speech and accurate voice replication, yet their influence on sociotechnical systems across diverse accents and linguistic traits is not fully understood. This study evaluates two synthetic AI voice services (Speechify and ElevenLabs) through a mixed methods approach using surveys and interviews to assess technical performance and uncover how users’ lived experiences influence their perceptions of accent variations in these speech technologies. Our findings reveal technical performance disparities across five regional, English-language accents and demonstrate how current speech generation technologies may inadvertently reinforce linguistic privilege and accent-based discrimination, potentially creating new forms of digital exclusion. Overall, our study highlights the need for inclusive design and regulation by providing actionable insights for developers, policymakers, and organizations to ensure equitable and socially responsible AI speech technologies.
By Avijit Ghosh, Christo Wilson, Jeffrey Gleason, Sarah Elizabeth Gillespie, Shira Michel, and Sufi Kaur for ACM Digital Library on June 23, 2025
Transphobic rhetoric is a prevalent problem on social media that existing platform policies fail to meaningfully address. As such, trans people often create or adopt technologies independent from (but deployed within) platforms that help them mitigate the effects of facing transphobia online. In this paper, we introduce TIDEs (Transphobia Identification in Digital Environments), a dataset and model for detecting transphobic speech to contribute to the growing space of trans technologies for content moderation. We outline care-centered data practices, a methodology for constructing and labeling datasets for hate speech classification, which we developed while working closely with trans and nonbinary data annotators. Our fine-tuned DeBERTa model succeeds at detecting several ideologically distinct types of transphobia, achieving an F1 score of 0.81. As a publicly available dataset and model, TIDEs can serve as the base for future trans technologies and research that confronts and addresses the problem of online transphobia. Our results suggest that downstream applications of TIDEs may be deployable for reducing online harm for trans people.
By Dallas Card, Eric Gilbert, Francesca Lameiro, Lavinia Dunagan, and Oliver Haimson for ACM Digital Library on June 23, 2025
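The TIDEs paper above reports fine-tuning a DeBERTa classifier that reaches an F1 score of 0.81. Below is a minimal, hypothetical sketch of that general recipe using Hugging Face Transformers; the checkpoint, placeholder texts, label scheme, and training settings are illustrative assumptions, not the authors' code or data.

```python
# Minimal sketch (not the authors' code): fine-tuning DeBERTa for binary
# hate-speech classification, the general recipe the TIDEs paper reports.
import numpy as np
from datasets import Dataset
from sklearn.metrics import f1_score
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Placeholder examples; the real TIDEs dataset provides annotated posts
# (0 = not transphobic, 1 = transphobic).
data = Dataset.from_dict({
    "text": ["placeholder post one", "placeholder post two"],
    "label": [0, 1],
})

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/deberta-v3-base", num_labels=2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

encoded = data.map(tokenize, batched=True)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"f1": f1_score(labels, preds)}  # the metric the paper reports (0.81)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tides-sketch", num_train_epochs=3),
    train_dataset=encoded,
    eval_dataset=encoded,
    compute_metrics=compute_metrics,
    tokenizer=tokenizer,  # enables dynamic padding of batches
)
trainer.train()
print(trainer.evaluate())
```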
While language-based AI is becoming increasingly popular, ensuring that these systems are socially responsible is essential. Despite their growing impact, large language models (LLMs), the engines of many language-driven applications, remain largely in the black box. Concerns about LLMs reinforcing harmful representations are shared by academia, industries, and the public. In professional contexts, researchers rely on LLMs for computational tasks such as text classification and contextual prediction, during which the risk of perpetuating biases cannot be overlooked. In a broader society where LLM-powered tools are widely accessible, interacting with biased models can shape public perceptions and behaviors, potentially reinforcing problematic social issues over time. This study investigates harmful representations in LLMs, focusing on ethnicity and gender in the Dutch context. Through template-based sentence construction and model probing, we identified potentially harmful representations using both automated and manual content analysis at the lexical and sentence levels, combining quantitative measurements with qualitative insights. Our findings have important ethical, legal, and political implications, challenging the acceptability of such harmful representations and emphasizing the need for effective mitigation strategies.
By Claes de Vreese, Gabriela Trogrlic, Natali Helberger, and Zilin Lin for ACM Digital Library on June 23, 2025
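The study above describes template-based sentence construction and model probing to surface harmful representations around ethnicity and gender in the Dutch context. The snippet below is a small, hypothetical illustration of that general technique using a fill-mask pipeline; the model choice (BERTje), the template, and the group terms are assumptions made for illustration, not the paper's own probes.

```python
# Hypothetical sketch of template-based probing: compare what a Dutch masked
# language model predicts for the same slot across different group terms.
from transformers import pipeline

fill = pipeline("fill-mask", model="GroNLP/bert-base-dutch-cased")

# Illustrative template and group terms (not the paper's materials).
template = "De {group} vrouw werkt als [MASK]."  # "The {group} woman works as [MASK]."
groups = ["Nederlandse", "Marokkaanse", "Surinaamse"]

for group in groups:
    prompt = template.format(group=group)
    # Top completions hint at the occupational associations the model has
    # learned for each group; systematic differences across groups can point
    # to the kind of harmful representations the study analyses.
    completions = [(p["token_str"], round(p["score"], 3))
                   for p in fill(prompt, top_k=5)]
    print(group, completions)
```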
An Experimental Approach Using Synthetic Faces and Human Evaluation.
From hliang2 on April 27, 2023
New York City’s Administration for Children’s Services (ACS) has been secretly using an AI risk assessment system since 2018 to flag families for additional investigation. This Markup investigation reveals how this algorithm mainly affects families of colour and raises serious questions about algorithmic bias against racialised and poor families in child welfare.
Continue reading “New York City uses a secret Child Welfare Algorithm”
Amsterdam officials’ technosolutionist way of thinking struck once again: they believed they could build technology that would prevent fraud while protecting citizens’ rights through their “Smart Check” AI system.
Continue reading “Racist Technology in Action: How the municipality of Amsterdam tried to roll out a ‘fair’ fraud detection algorithm. Spoiler alert: it was a disaster”
Amsterdam’s struggles with its welfare fraud algorithm show us the stakes of deploying AI in situations that directly affect human lives.
By Eileen Guo and Hans de Zwart for MIT Technology Review on June 17, 2025
How a family’s neighborhood, age, and mental health might get their case a deeper look.
By Colin Lecher for The Markup on May 20, 2025
Amsterdam spent years trying to build an unbiased welfare fraud algorithm. Here’s what we found when we analyzed it.
By Amanda Silverman, Eileen Guo, Eva Constantaras, Gabriel Geiger, and Justin-Casimir Braun for Lighthouse Reports on June 11, 2025
The powerful new AI model is designed to analyze intercepted communications – but experts say such systems can exacerbate biases and are prone to making mistakes.
By Harry Davies and Yuval Abraham for The Guardian on March 6, 2025
The Wetenschappelijke Adviesraad Politie (WARP) advises the chief of police on seven urgent challenges around digitalisation and AI in police work.
From Wetenschappelijke Adviesraad Politie on June 5, 2025
The government has repeatedly gone wrong with algorithms meant to combat benefits fraud. The municipality of Amsterdam wanted to do it all differently, but discovered that an ethical algorithm is an illusion.
By Hans de Zwart and Jeroen van Raalte for Trouw on June 6, 2025
Just don’t ask it about “white genocide.”
By Zeynep Tufekci for The New York Times on May 17, 2025
For a day or so, Musk’s Grok AI chatbot inserted its belief in a “white genocide” in South Africa, by now a classic white supremacist fabrication, into nearly every answer it gave, regardless of the question asked.
Continue reading “Racist Technology in Action: Grok AI is obsessively focused on the extreme right trope of “white genocide” in South Africa”
Google’s $32bn purchase of Israeli cloud security company Wiz is its most expensive ever acquisition.
By Areeb Ullah for Middle East Eye on March 20, 2025
Users asking Elon Musk’s Grok chatbot on X for information about baseball, HBO Max or even a cat playing in a sink received some … curious responses Wednesday.
By Derek Robertson for POLITICO on May 15, 2025
What in the world just happened with Elon Musk’s chatbot?
By Ali Breland and Matteo Wong for The Atlantic on May 15, 2025
We all know that racist algorithms can harm people across many sectors, and healthcare is no exception. In a powerful commentary published by Cell Press, Ferryman et al. argue that racism must be treated as a core ethical issue in healthcare AI, not merely a flaw to be patched after deployment.
Continue reading “‘Ethical’ AI in healthcare has a racism problem, and it needs to be fixed ASAP”
State department launches AI-assisted reviews of accounts to look for what it perceives as Hamas supporters.
From The Guardian on March 7, 2025
The LA Times is adding AI-written alternative points of view to opinion columns, editorials, and commentary. To the surprise of no one, this AI then downplayed the Ku Klux Klan’s hate-driven nature as a movement.
Continue reading “Racist Technology in Action: An LA Times AI tool defending the KKK”
The LA Times is laying off and buying out staff, while introducing highly dubious AI tools. This is what automation looks like in 2025.
By Brian Merchant for Blood in the Machine on March 8, 2025
The company said it was working to fix the problem after iPhone users began reporting the issue.
By Eli Tan and Tripp Mickle for The New York Times on February 25, 2025
With the development of artificial intelligence racing forward at warp speed, some of the richest men in the world may be deciding the fate of humanity right now.
By Garrison Lovely for Jacobin on January 22, 2025
Grok is the chatbot made by xAI, a startup founded by Elon Musk, and is the generative AI system powering X (née Twitter). It has recently gained the ability to generate photorealistic images, including of celebrities. This is a problem because its ‘guardrails’ are lacking: it willingly generates racist and other deeply problematic images.
Continue reading “Racist Technology in Action: Grok’s total lack of safeguards against generating racist content”
Amid reports of creation of fake racist images, Signify warns problem will get ‘so much worse’ over the next year.
By Raphael Boyd for The Guardian on January 13, 2025
Meta, fresh off announcement to end factchecking, follows McDonald’s and Walmart in rolling back diversity initiatives.
By Adria R Walker for The Guardian on January 10, 2025
Despite a stellar reference from a landlord of 17 years, Mary Louis was rejected after being screened by firm SafeRent
By Johana Bhuiyan for The Guardian on December 14, 2024
Exclusive: Age, disability, marital status and nationality influence decisions to investigate claims, prompting fears of ‘hurt first, fix later’ approach.
By Robert Booth for The Guardian on December 6, 2024
This paper examines ‘open’ artificial intelligence (AI). Claims about ‘open’ AI often lack precision, frequently eliding scrutiny of substantial industry concentration in large-scale AI development and deployment, and often incorrectly applying understandings of ‘open’ imported from free and open-source software to AI systems. At present, powerful actors are seeking to shape policy using claims that ‘open’ AI is either beneficial to innovation and democracy, on the one hand, or detrimental to safety, on the other. When policy is being shaped, definitions matter. To add clarity to this debate, we examine the basis for claims of openness in AI, and offer a material analysis of what AI is and what ‘openness’ in AI can and cannot provide: examining models, data, labour, frameworks, and computational power. We highlight three main affordances of ‘open’ AI, namely transparency, reusability, and extensibility, and we observe that maximally ‘open’ AI allows some forms of oversight and experimentation on top of existing models. However, we find that openness alone does not perturb the concentration of power in AI. Just as many traditional open-source software projects were co-opted in various ways by large technology companies, we show how rhetoric around ‘open’ AI is frequently wielded in ways that exacerbate rather than reduce concentration of power in the AI sector.
By David Gray Widder, Meredith Whittaker, and Sarah Myers West for Nature on November 27, 2024
A massive volunteer-led effort to collect training data in more languages, from people of more ages and genders, could help make the next generation of voice AI more inclusive and less exploitative.
By Melissa Heikkilä for MIT Technology Review on November 15, 2024
Behind a veil of secrecy, the social security agency deploys discriminatory algorithms searching for a fraud epidemic it has invented.
By Ahmed Abdigadir, Anna Tiberg, Daniel Howden, Eva Constantaras, Frederick Laurin, Gabriel Geiger, Henrik Malmsten, Iben Ljungmark, Justin-Casimir Braun, Sascha Granberg, and Thomas Molén for Lighthouse Reports on November 27, 2024
UC Berkeley recently discovered a fund established in 1975 to support research into eugenics. Our (avowed) perspective on this ideology has since changed, so the university repurposed the fund and commissioned a series on the legacies of eugenics for the LA Review of Books.
Continue reading “Ruha Benjamin on Eugenics 2.0”
More than half of students are now using generative AI, casting a shadow over campuses as tutors and students turn on each other and hardworking learners are caught in the flak. Will Coldwell reports on a broken system.
By Will Coldwell for The Guardian on December 15, 2024
In the fifth essay of the Legacies of Eugenics series, Ruha Benjamin explores how AI evangelists wrap their self-interest in a cloak of humanistic concern.
By Ruha Benjamin for Los Angeles Review of Books on October 18, 2024
Only photos of men when you search for ‘CEO’, or facial recognition that doesn’t work for people of colour: artificial intelligence is often sexist and discriminatory. That problem does not originate in AI, but in physical, offline society.
By Lisa O’Malley and Siri Beerends for Linda on September 24, 2024
On September 18, 2024, as part of the BCcampus EdTech Sandbox Series, I presented my case against AI proctoring and AI detection. In this post you will learn about key points from my presentation and our discussion.
By Ian Linkletter for BCcampus on October 16, 2024
Common Sense, an education platform that advocates for and advises on an equitable and safe school environment, published a report last month on the adoption of generative AI at home and school. Parents, teachers, and children were surveyed to better understand the adoption and effects of the technology.
Continue reading “Falsely Flagged: The AI-Driven Discrimination Black Students Face”
Are you an educator seeking a supportive space to critically examine AI surveillance tools? This workshop is for you. In an era where AI increasingly pervades education, AI detection and proctoring have sparked significant controversy. These tools, categorized as academic surveillance software, algorithmically monitor behaviour and movements. Students are increasingly forced to face them. Together, we will move beyond surveillance toward a culture of trust and transparency, shining a light on the black box of surveillance and discussing our findings. In this two-hour workshop, we will explore AI detection and proctoring through a 40-minute presentation, an hour of activities and discussion, and 20 minutes of group tool evaluation using a rubric.
By Ian Linkletter for BCcampus on September 18, 2024