Grok is the chatbot made by xAI, Elon Musk’s AI startup, and the generative AI system powering X (née Twitter). It has recently gained the ability to generate photorealistic images, including images of celebrities. This is a problem, as its ‘guardrails’ are lacking: it willingly generates racist and other deeply problematic images.
‘Just the start’: X’s new AI software driving online racist abuse, experts warn
Amid reports of the creation of fake racist images, Signify warns the problem will get ‘so much worse’ over the next year.
By Raphael Boyd for The Guardian on January 13, 2025
Meta terminates its DEI programs days before Trump inauguration
Meta, fresh off its announcement that it will end factchecking, follows McDonald’s and Walmart in rolling back diversity initiatives.
By Adria R Walker for The Guardian on January 10, 2025
She didn’t get an apartment because of an AI-generated score – and sued to help others avoid the same fate
Despite a stellar reference from a landlord of 17 years, Mary Louis was rejected after being screened by the firm SafeRent.
By Johana Bhuiyan for The Guardian on December 14, 2024
Revealed: bias found in AI system used to detect UK benefits fraud
Exclusive: Age, disability, marital status and nationality influence decisions to investigate claims, prompting fears of ‘hurt first, fix later’ approach.
By Robert Booth for The Guardian on December 6, 2024
Why ‘open’ AI systems are actually closed, and why this matters
This paper examines ‘open’ artificial intelligence (AI). Claims about ‘open’ AI often lack precision, frequently eliding scrutiny of substantial industry concentration in large-scale AI development and deployment, and often incorrectly applying understandings of ‘open’ imported from free and open-source software to AI systems. At present, powerful actors are seeking to shape policy using claims that ‘open’ AI is either beneficial to innovation and democracy, on the one hand, or detrimental to safety, on the other. When policy is being shaped, definitions matter. To add clarity to this debate, we examine the basis for claims of openness in AI, and offer a material analysis of what AI is and what ‘openness’ in AI can and cannot provide: examining models, data, labour, frameworks, and computational power. We highlight three main affordances of ‘open’ AI, namely transparency, reusability, and extensibility, and we observe that maximally ‘open’ AI allows some forms of oversight and experimentation on top of existing models. However, we find that openness alone does not perturb the concentration of power in AI. Just as many traditional open-source software projects were co-opted in various ways by large technology companies, we show how rhetoric around ‘open’ AI is frequently wielded in ways that exacerbate rather than reduce concentration of power in the AI sector.
By David Gray Widder, Meredith Whittaker, and Sarah Myers West for Nature on November 27, 2024
How this grassroots effort could make AI voices more diverse
A massive volunteer-led effort to collect training data in more languages, from people of more ages and genders, could help make the next generation of voice AI more inclusive and less exploitative.
By Melissa Heikkilä for MIT Technology Review on November 15, 2024
Sweden’s Suspicion Machine
Behind a veil of secrecy, the social security agency deploys discriminatory algorithms in search of a fraud epidemic it has invented.
By Ahmed Abdigadir, Anna Tiberg, Daniel Howden, Eva Constantaras, Frederick Laurin, Gabriel Geiger, Henrik Malmsten, Iben Ljungmark, Justin-Casimir Braun, Sascha Granberg, and Thomas Molén for Lighthouse Reports on November 27, 2024
Ruha Benjamin on Eugenics 2.0
UC Berkeley recently discovered a fund, established in 1975, dedicated to funding research into eugenics. Nowadays, our (avowed) perspective on this ideology has changed, so the university repurposed the fund and commissioned a series on the legacies of eugenics for the LA Review of Books.
‘I received a first but it felt tainted and undeserved’: inside the university AI cheating crisis
More than half of students are now using generative AI, casting a shadow over campuses as tutors and students turn on each other and hardworking learners are caught in the flak. Will Coldwell reports on a broken system.
By Will Coldwell for The Guardian on December 15, 2024
The New Artificial Intelligentsia
In the fifth essay of the Legacies of Eugenics series, Ruha Benjamin explores how AI evangelists wrap their self-interest in a cloak of humanistic concern.
By Ruha Benjamin for Los Angeles Review of Books on October 18, 2024
AI often sexist and discriminatory: ‘Never more neutral than humans’
Only photos of men when you search for ‘CEO’, or facial recognition that does not work for people of colour: artificial intelligence is often sexist and discriminatory. That problem does not originate in AI itself, but in physical society.
By Lisa O’Malley and Siri Beerends for Linda on September 24, 2024
Beyond Surveillance: The Case Against AI Proctoring & AI Detection
On September 18, 2024, as part of the BCcampus EdTech Sandbox Series, I presented my case against AI proctoring and AI detection. In this post you will learn about key points from my presentation and our discussion.
By Ian Linkletter for BCcampus on October 16, 2024
Falsely Flagged: The AI-Driven Discrimination Black Students Face
Common Sense, an education platform that advocates and advises for an equitable and safe school environment, published a report last month on the adoption of generative AI at home and school. Parents, teachers, and children were surveyed to better understand the adoption and effects of the technology.
Beyond Surveillance – The Case Against AI Detection and AI Proctoring
Are you an educator seeking a supportive space to critically examine AI surveillance tools? This workshop is for you. In an era where AI increasingly pervades education, AI detection and proctoring have sparked significant controversy. These tools, categorized as academic surveillance software, algorithmically monitor behaviour and movements. Students are increasingly forced to face them. Together, we will move beyond surveillance toward a culture of trust and transparency, shining a light on the black box of surveillance and discussing our findings. In this two-hour workshop, we will explore AI detection and proctoring through a 40-minute presentation, an hour of activities and discussion, and 20 minutes of group tool evaluation using a rubric.
By Ian Linkletter for BCcampus on September 18, 2024
Series: AI Colonialism
An investigation into how AI is enriching a powerful few by dispossessing communities that have been dispossessed before.
From MIT Technology Review on April 19, 2022
Black Teens’ Schoolwork Twice As Likely To Be Falsely Flagged As AI-Generated
Black students are more than twice as likely as their peers to be falsely accused of using AI tools to complete school assignments.
By Sara Keenan for POCIT on September 19, 2024
The pace at which Indigenous languages are currently disappearing is worryingly high
Half of all languages are currently threatened with extinction. The Sateré-Mawé in Brazil want to prevent this by digitising their language. But can that be done without Big Tech? And who actually owns the language?
By Sanne Bloemink for De Groene Amsterdammer on August 21, 2024
Why the AI revolution is leaving Africa behind
Large infrastructure gaps are creating a new digital divide.
From The Economist on July 25, 2024
AI was supposed to spare civilian lives in wartime. In reality, even more people are dying
Artificial intelligence was supposed to reduce the number of civilian deaths in wars. In reality, there are more. Because where people are reduced to data points, opening fire quickly feels objective and correct.
By Lauren Gould, Linde Arentze, and Marijn Hoijtink for De Groene Amsterdammer on July 24, 2024
Why Stopping Algorithmic Inequality Requires Taking Race Into Account
Let us explain. With cats.
By Aaron Sankin and Natasha Uzcátegui-Liggett for The Markup on July 18, 2024
Politica & AI: the importance of AI awareness in the political landscape
The theme of Prinsessendag 2024 is AI & Politica. In her column, speaker Robin Pocornie explains why this theme has so much influence on the current political landscape.
By Robin Pocornie for Nederlandse Vrouwenraad on August 14, 2024
Google Workers Revolt Over $1.2 Billion Israel Contract
Two Google workers have resigned and another was fired over a project providing AI and cloud services to the Israeli government and military.
By Billy Perrigo for Time on April 10, 2024
Meta and Lavender
A little-discussed detail in the Lavender AI article is that Israel is killing people based on being in the same WhatsApp group as a suspected militant. Where are they getting this data? Is WhatsApp sharing it?
By Paul Biggar for Paul Biggar on April 16, 2024
‘The machine did it coldly’: Israel used AI to identify 37,000 Hamas targets
Israeli intelligence sources reveal use of ‘Lavender’ system in Gaza war and claim permission given to kill civilians in pursuit of low-ranking militants.
By Bethan McKernan and Harry Davies for The Guardian on April 3, 2024
After the Dutch childcare benefits scandal, in which many single-parent families and families with a migration background, among others, were wrongly accused of fraud, it became painfully clear that not only people discriminate, but algorithms do too
It was promised that these systems would become fairer, but the latest annual report of the Autoriteit Persoonsgegevens (the Dutch Data Protection Authority) shows that little has improved since then. Algorithms still wrongly categorise people with certain characteristics as a risk. Noëlle Cecilia, co-founder of Brush AI (@ai.brush), was a guest on Mandy’s show on Sunday. She builds algorithms for companies and spent a year researching their fairness and discrimination. She explains why the mindset in developing AI systems has to change.
By Noëlle Cecilia for Instagram on July 9, 2024
Non-white American parents are embracing AI faster than white ones
The digital divide seems to have flipped.
From The Economist on June 27, 2024
War, Memes, Art, Protest, and Porn: Jail(break)ing Synthetic Imaginaries Under OpenAI’s Content Policy Restrictions
Using the method of jail(break)ing to study how the visualities of sensitive issues transform under the gaze of OpenAI’s GPT-4o, we found that:
- Jail(break)ing takes place when the prompts force the model to combine jailing (transforming or fine-tuning content to comply with content restrictions) and jailbreaking (attempting to bypass or circumvent these restrictions).
- Image-to-text generation allows more space for controversy than text-to-image.
- Visual outputs reveal issue-specific and shared transformation patterns for charged, ambiguous, or divisive artefacts.
- These patterns include foregrounding the background or ‘dressing up’ (porn), imitative disambiguation (memes), pink-washing (protest), cartoonization/anonymization (war), and exaggeration of style (art).
By Alexandra Rosca, Elena Pilipets, Energy Ng, Esmée Colbourne, Marina Loureiro, Marloes Geboers, and Riccardo Ventura for Digital Methods Initiative on August 6, 2024
Generative AI’s ability to ‘pink-wash’ Black and Queer protests
Using a very clever methodology, this year’s Digital Methods Initiative Summer School participants show how generative AI models like OpenAI’s GPT-4o will “dress up” controversial content, such as war, protest, or porn, when you push them to work with it.
Racist Technology in Action: AI detection of emotion rates Black basketball players as ‘angrier’ than their White counterparts
In 2018, Lauren Rhue showed that two leading emotion detection software products had a racial bias against Black men: Face++ rated them as angrier, and Microsoft AI rated them as more contemptuous.
Are you 80% angry and 2% sad? Why ‘emotional AI’ is fraught with problems
AI that purports to read our feelings may enhance user experience but concerns over misuse and bias mean the field is fraught with potential dangers.
By Ned Carter Miles for The Guardian on June 23, 2024
How generative AI tools represent EU politicians: in a biased way
AlgorithmWatch experimented with three major generative AI tools, generating 8,700 images of politicians. They found that all of these tools make an active effort to lessen bias, but that the way they attempt to do so is problematic.
Podcast: Art as a prophetic activity for the future of AI
Our own Hans de Zwart was a guest on the ‘Met Nerds om Tafel’ podcast. Together with Karen Palmer (creator of Consensus Gentium, a film about surveillance that watches you back), he discussed the role of art and storytelling in getting us ready for the future.
Image generators are trying to hide their biases – and they make them worse
In the run-up to the EU elections, AlgorithmWatch has investigated which election-related images can be generated by popular AI systems. Two of the largest providers do not adhere to the security measures that they themselves recently announced.
By Nicolas Kayser-Bril for AlgorithmWatch on May 29, 2024
I am not a typo
It’s time to correct autocorrect.
From I am not a typo
People with commonly autocorrected names call for tech firms to fix problem
‘I am not a typo’ campaign is calling for technology companies to make autocorrect less ‘western- and white-focused’.
By Robert Booth for The Guardian on May 22, 2024
We’ll fix the mistakes later: how the municipality unleashed a dubious algorithm on the people of Rotterdam
It seemed too good to be true: an algorithm to detect welfare fraud. Despite warnings, the municipality of Rotterdam kept believing in it for almost four years. A handful of civil servants, insufficiently aware of the ethical risks, were able to experiment undisturbed with the data of vulnerable people for years.
By Romy van Dijk and Saskia Klaassen for Vers Beton on October 23, 2023
ATTENTION, says the Ministry of Foreign Affairs’ computer for tens of thousands of visa applications. Is that discrimination?
Discriminatory algorithm: according to an investigation, the algorithm that the Ministry of Foreign Affairs uses to assess visa applications discriminated. Unhappy with that conclusion, the ministry asked for a second opinion.
By Carola Houtekamer and Merijn Rengers for NRC on May 1, 2024
AI detection has no place in education
The ubiquitous availability of AI has made plagiarism detection software utterly useless, argues our Hans de Zwart in the Volkskrant.
The datasets to train AI models need more checks for harmful and illegal materials
This Atlantic conversation between Matteo Wong and Abeba Birhane touches on some critical issues surrounding the use of large datasets to train AI models.