Inside the Suspicion Machine

Obscure government algorithms are making life-changing decisions about millions of people around the world. Here, for the first time, we reveal how one of these systems works.

By Dhruv Mehrotra, Eva Constantaras, Gabriel Geiger, Htet Aung and Justin-Casimir Braun for WIRED on March 6, 2023

How a Rotterdam fraud algorithm learned to suspect vulnerable groups

An algorithm that the municipality of Rotterdam used for years to predict welfare fraud ranked young mothers and people who speak poor Dutch among the highest-risk groups. They were the most likely to face a strict investigation by the municipality. This emerges from research by Lighthouse Reports, Argos, Vers Beton and Follow the Money, in which journalists obtained a complete fraud algorithm for the first time.

By David Davidson and Tom Claessens for Follow the Money on March 6, 2023

Artificial intelligence must march in step with human rights

The Netherlands is eager to be a front-runner in the use of artificial intelligence in military contexts. Yet this technology can lead to racism and discrimination. In an open letter, critics call for a moratorium on the military use of artificial intelligence. Initiator Oumaima Hajri explains why.

By Oumaima Hajri for De Kanttekening on February 22, 2023

Parables of AI in/from the Majority World: An Anthology

Encounters with data and AI require contending with the uncertainties of systems that are most often understood through their inputs and outputs. Storytelling is one way to reckon with and make sense of these uncertainties. So what stories can we tell about a world that has increasingly come to rely on AI-based, data-driven interventions to address social problems?

By Patrick Davison, Ranjit Singh and Rigoberto Lara Guzmán for Data & Society on December 7, 2022

The Algorithm Addiction

Mass profiling system SyRI resurfaces in the Netherlands despite ban and landmark court ruling.

By Allart van der Woude, Daniel Howden, David Davidson, Evaline Schot, Gabriel Geiger, Judith Konijn, Ludo Hekman, Marc Hijink, May Bulman and Saskia Adriaens for Lighthouse Reports on December 20, 2022

We come to bury ChatGPT, not to praise it.

Large language models (LLMs) like the GPT family learn the statistical structure of language by optimising their ability to predict missing words in sentences (as in ‘The cat sat on the [BLANK]’). Despite the impressive technical ju-jitsu of transformer models and the billions of parameters they learn, it’s still a computational guessing game. ChatGPT is, in technical terms, a ‘bullshit generator’.
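The "computational guessing game" McQuillan describes can be illustrated with a deliberately tiny sketch: a bigram counter that fills in the next word by pure frequency. This is an assumption-laden toy (real LLMs use transformer networks over billions of tokens, not bigram counts over a hand-written corpus), but it shows the statistical core of the objective.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for internet-scale training text (an assumption
# for illustration only; GPT-style models train on vastly more data).
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat sat on the mat ."
).split()

# Count bigrams: how often does each word follow each context word?
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(prev_word):
    """Fill the blank with the statistically most likely next word."""
    return bigrams[prev_word].most_common(1)[0][0]

print(predict("sat"))  # "on": the most frequent word after "sat"
```

The point of the toy: "The cat sat on the [BLANK]" gets answered not by understanding cats or mats, but by tallying what tended to follow in the training text.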

By Dan McQuillan for Dan McQuillan on February 6, 2023

Word embeddings quantify 100 years of gender and ethnic stereotypes

Word embeddings are a popular machine-learning method that represents each English word by a vector, such that the geometry between these vectors captures semantic relations between the corresponding words. We demonstrate that word embeddings can be used as a powerful tool to quantify historical trends and social change. As specific applications, we develop metrics based on word embeddings to characterize how gender stereotypes and attitudes toward ethnic minorities in the United States evolved during the 20th and 21st centuries starting from 1910. Our framework opens up a fruitful intersection between machine learning and quantitative social science.
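The kind of association metric the abstract describes can be sketched in miniature. The vectors below are hand-crafted toy embeddings (an assumption; the study uses embeddings trained on decade-sliced historical corpora), but the measurement itself, comparing a word's cosine similarity to gendered anchor words, is the same shape.

```python
from math import sqrt

# Tiny hand-crafted 3-d "embeddings" (assumption for illustration;
# real studies use high-dimensional vectors trained on large corpora).
vecs = {
    "he":       [1.0, 0.1, 0.0],
    "she":      [0.1, 1.0, 0.0],
    "engineer": [0.9, 0.2, 0.3],
    "nurse":    [0.2, 0.9, 0.3],
}

def cosine(a, b):
    """Cosine similarity: the geometry that encodes semantic relations."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm

def gender_association(word):
    """Positive -> closer to 'she'; negative -> closer to 'he'."""
    return cosine(vecs[word], vecs["she"]) - cosine(vecs[word], vecs["he"])

print(gender_association("nurse") > 0)       # leans toward "she"
print(gender_association("engineer") < 0)    # leans toward "he"
```

Computed per decade of training text, a score like this is what lets the authors chart how occupational stereotypes shifted over a century.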

By Dan Jurafsky, James Zou, Londa Schiebinger and Nikhil Garg for PNAS on April 3, 2018

The cheap, racialised, Kenyan workers making ChatGPT “safe”

Stories about the hidden and exploitative racialised labour that fuels the development of technologies continue to surface, and this time the spotlight is on ChatGPT. Billy Perrigo, who previously reported on Meta’s content moderation sweatshop and on whistleblower Daniel Motaung, who took Meta to court, has shed light on how OpenAI relied on outsourced, exploitative labour in Kenya to make ChatGPT less toxic.


Quantifying bias in society with ChatGPT-like tools

ChatGPT is an implementation of a so-called ‘large language model’. These models are trained on text from the internet at large, which means they inherit the bias that exists in our language and in our society. This has an interesting consequence: it suddenly becomes possible to see, in a quantitative and undeniable way, how bias changes over time.


Racist Technology in Action: The “underdiagnosis bias” in AI algorithms for health: Chest radiographs

This study builds upon work on algorithmic bias and bias in healthcare. The use of AI-based diagnostic tools has been motivated by a global shortage of radiologists and by research showing that AI algorithms can match specialist performance (particularly in medical imaging). Yet the topic of AI-driven underdiagnosis has remained relatively unexplored.


Alliance Against Military AI

Civil society organisations urge the Dutch government to immediately establish a moratorium on developing AI systems in the military domain.

By Oumaima Hajri for Alliantie tegen militaire AI on February 15, 2023

The Costs of Connection – How Data is Colonizing Human Life and Appropriating it for Capitalism

A profound exploration of how the ceaseless extraction of information about our intimate lives is remaking both global markets and our very selves. The Costs of Connection represents an enormous step forward in our collective understanding of capitalism’s current stage, a stage in which the final colonial input is the raw data of human life. Challenging, urgent and bracingly original.

By Nick Couldry and Ulises A. Mejias for Colonized by Data

What’s at stake with losing (Black) Twitter and moving to (white) Mastodon?

The imminent demise of Twitter after Elon Musk’s takeover sparked an exodus of people leaving the platform, which is only expected to increase. The significant increase in hate speech and the generally hostile atmosphere created by its owner’s erratic decrees (such as Trump’s reinstatement) made, in New Yorker writer Jelani Cobb’s words, “remaining completely untenable”. This often vocal movement of people off the platform has sparked a debate on what people stand to lose and what the alternative is.


Will AI soon put you out of a job?

The end of 2022 was dominated by AI tools. You can create digital artworks with DALL-E, AI profile pictures with Lensa, and, to top it all off, generate an entire cover letter or essay within seconds via ChatGPT. We already knew that AI, or artificial intelligence, can do a lot, but ChatGPT is genuinely seen as a breakthrough. What is it? And will AI make us redundant? Oh, and by the way, Devran thought he would ring in the new year relaxing with the chatbot, but whether that was such a good idea…

By Robin Pocornie for YouTube on December 31, 2022

Profiting off Black bodies

Tiera Tanksley’s work seeks to better understand how forms of digitally mediated traumas, such as seeing images of Black people dead and dying on social media, are impacting Black girls’ mental and emotional wellness in the U.S. and Canada. Her fears were confirmed in her findings: Black girls report unprecedented levels of fear, depression, anxiety and chronic stress. Viewing Black people being killed by the state was deeply traumatic, with mental, emotional and physiological effects.


Racist Technology in Action: Let’s make an avatar! Of sexy women and tough men of course

Just upload a selfie to the “AI avatar app” Lensa and it will generate a digital portrait of you: think, for example, of a slightly fitter or more beautiful version of yourself as an astronaut or the lead singer in a band. If you are a man, that is. As it turns out, for women, and especially women of Asian heritage, Lensa churns out pornified, sexy and skimpily clothed avatars.


I’m @Sinders on Mastodon but I’m not giving up on Twitter, yet

I’m sure you’ve seen the tweets, and the think pieces about how much worse Twitter is gonna get. My friend Justin Hendrix mentioned losing a few hundred followers in the space of a few hours, after Elon brought a sink into Twitter headquarters (which is the lamest bit I’ve ever seen, a massive fail of a dad joke). A huge chunk of people I follow now have their Mastodon handles in their Twitter names. It’s a chunk of the influencers, academics, activists, and civil society folks, the researchers who I follow, who are actively mourning, and hand-wringing, about the destruction that is to come, already in the throes of grief for the Twitter that was. But the thing is: all of these folks are white.

By Caroline Sinders for Medium on October 31, 2022
