The AI-fueled chatbot gives answers that can seem human-sounding. They may also share humans’ bias.
From CBS News on March 6, 2023
Raising funds from investors is unfavorable for marginalized founders, who face racial bias in the world of venture capital.
By Miranda Perez for The Guardian on March 12, 2023
Obscure government algorithms are making life-changing decisions about millions of people around the world. Here, for the first time, we reveal how one of these systems works.
By Dhruv Mehrotra, Eva Constantaras, Gabriel Geiger, Htet Aung and Justin-Casimir Braun for WIRED on March 6, 2023
An algorithm that the municipality of Rotterdam used for years to predict welfare fraud ranked young mothers and people with a poor command of Dutch among the highest-risk groups. They had the greatest chance of a strict inspection by the municipality. This emerges from an investigation by Lighthouse Reports, Argos, Vers Beton and Follow the Money, in which journalists obtained a complete fraud algorithm for the first time.
By David Davidson and Tom Claessens for Follow the Money on March 6, 2023
Robin Pocornie was featured in the Dutch current affairs programme EenVandaag. Professor Sennay Ghebreab and former Member of Parliament Kees Verhoeven provided expertise and commentary.
Continue reading “First Dutch citizen proves that an algorithm discriminated against her on the basis of her skin colour”

According to OpenAI and Google, artificial intelligence can benefit all of humanity. But research shows how one-sided and limited most of the data used to train AI is. According to researcher Balázs Bodó, that is reason to press the big red pause button.
By Balázs Bodó and Maurits Martijn for De Correspondent on February 22, 2023
For years, the Belastingdienst (the Dutch tax authority) labelled donations to Islamic institutions as suspicious by default. This ‘institutional Islamophobia’ was fuelled by a scandal that is now finally before the courts.
By Marco de Vries for De Groene Amsterdammer on February 22, 2023
The Netherlands wants to be a frontrunner in the use of artificial intelligence in military situations. This technology, however, can lead to racism and discrimination. In an open letter, critics call for a moratorium on the use of artificial intelligence. Initiator Oumaima Hajri explains why.
By Oumaima Hajri for De Kanttekening on February 22, 2023
Encounters with data and AI require contending with the uncertainties of systems that are most often understood through their inputs and outputs. Storytelling is one way to reckon with and make sense of these uncertainties. So what stories can we tell about a world that has increasingly come to rely on AI-based, data-driven interventions to address social problems?
By Patrick Davison, Ranjit Singh and Rigoberto Lara Guzmán for Data & Society on December 7, 2022
Mass profiling system SyRI resurfaces in the Netherlands despite ban and landmark court ruling.
By Allart van der Woude, Daniel Howden, David Davidson, Evaline Schot, Gabriel Geiger, Judith Konijn, Ludo Hekman, Marc Hijink, May Bulman and Saskia Adriaens for Lighthouse Reports on December 20, 2022
Large language models (LLMs) like the GPT family learn the statistical structure of language by optimising their ability to predict missing words in sentences (as in ‘The cat sat on the [BLANK]’). Despite the impressive technical ju-jitsu of transformer models and the billions of parameters they learn, it’s still a computational guessing game. ChatGPT is, in technical terms, a ‘bullshit generator’.
By Dan McQuillan for Dan McQuillan on February 6, 2023
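The “computational guessing game” McQuillan describes can be illustrated with a toy sketch: a bigram model that predicts the next word purely from co-occurrence counts, the same statistical principle (at a vastly smaller scale, and without transformers) that large language models optimise. The corpus and function names here are illustrative only, not taken from any real model.

```python
from collections import Counter, defaultdict

# Toy corpus; real models train on a large slice of the internet.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows another (bigram statistics).
bigrams = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    bigrams[w1][w2] += 1

def predict_next(word):
    """Guess the most frequent continuation seen in training."""
    return bigrams[word].most_common(1)[0][0]

print(predict_next("sat"))  # "on": a statistical guess, not understanding
```

The point of the toy: the model has no notion of cats or mats, only of which strings tend to follow which, which is why its fluent-sounding output carries no guarantee of truth.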
Word embeddings are a popular machine-learning method that represents each English word by a vector, such that the geometry between these vectors captures semantic relations between the corresponding words. We demonstrate that word embeddings can be used as a powerful tool to quantify historical trends and social change. As specific applications, we develop metrics based on word embeddings to characterize how gender stereotypes and attitudes toward ethnic minorities in the United States evolved during the 20th and 21st centuries starting from 1910. Our framework opens up a fruitful intersection between machine learning and quantitative social science.
By Dan Jurafsky, James Zou, Londa Schiebinger and Nikhil Garg for PNAS on April 3, 2018
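The method in the paper above can be sketched in miniature: measure how much more strongly a word’s vector points toward one group of words than another. The three-dimensional vectors below are hand-crafted for illustration only; the actual study uses embeddings trained on decade-by-decade historical corpora, and the helper names are assumptions of this sketch.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(word, group_a, group_b):
    """Mean similarity to group A minus mean similarity to group B."""
    return (np.mean([cosine(word, g) for g in group_a])
            - np.mean([cosine(word, g) for g in group_b]))

# Hand-crafted toy vectors (NOT real embeddings).
she, her = np.array([1.0, 0.0, 0.0]), np.array([0.9, 0.1, 0.0])
he, him = np.array([0.0, 1.0, 0.0]), np.array([0.1, 0.9, 0.0])
nurse = np.array([0.8, 0.2, 0.1])
engineer = np.array([0.2, 0.8, 0.1])

# Positive: closer to the "she" group; negative: closer to the "he" group.
print(association(nurse, [she, her], [he, him]))     # > 0
print(association(engineer, [she, her], [he, him]))  # < 0
```

Tracking how such association scores drift across embeddings trained on different decades is what lets the authors quantify changing stereotypes over time.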
Nakeema Stefflbauer is bringing women from underrepresented backgrounds into the Berlin tech scene.
By Gouri Sharma and Nakeema Stefflbauer for MIT Technology Review on February 21, 2023
In a time when data and algorithms steer our behaviour, we must collectively search for a new conception of the law, writes Maxim Februari.
By Maxim Februari for NRC on February 17, 2023
This past week the Dutch government hosted and organised the military AI conference REAIM 2023. Together with eight other NGOs we signed an open letter, initiated by Oumaima Hajri, that calls on the Dutch government to stop promoting narratives of “innovation” and “opportunities” and, rather, centre the very real and often disparate human impact.
Continue reading “An alliance against military AI”

Stories about the hidden and exploitative racialised labour that fuels the development of technologies continue to surface, and this time it is about ChatGPT. Billy Perrigo, who previously reported on Meta’s content moderation sweatshop and on whistleblower Daniel Motaung, who took Meta to court, has shed light on how OpenAI relied on outsourced, exploitative labour in Kenya to make ChatGPT less toxic.
Continue reading “The cheap, racialised, Kenyan workers making ChatGPT ‘safe’”

ChatGPT is an implementation of a so-called ‘large language model’. These models are trained on text from the internet at large. This means that these models inherit the bias that exists in our language and in our society. This has an interesting consequence: it suddenly becomes possible to see, in a quantitative and undeniable way, how bias changes through the times.
Continue reading “Quantifying bias in society with ChatGPT-like tools”

This study builds upon work on algorithmic bias and bias in healthcare. The use of AI-based diagnostic tools has been motivated by a global shortage of radiologists and by research showing that AI algorithms can match specialist performance (particularly in medical imaging). Yet the topic of AI-driven underdiagnosis has been relatively unexplored.
Continue reading “Racist Technology in Action: The “underdiagnosis bias” in AI algorithms for health: Chest radiographs”

Starting February 18th, 2023.
Continue reading “Events, exhibits and other things to do”

Civil society organisations urge the Dutch government to immediately establish a moratorium on developing AI systems in the military domain.
By Oumaima Hajri for Alliantie tegen militaire AI on February 15, 2023
The secret services’ reign of confusion, rogue mayors and algorithm oversight (or not): a quick read through the most interesting developments at the intersection of human rights and technology from the Netherlands.
By Evelyn Austin for Bits of Freedom on February 2, 2023
The side effects of sleep deprivation are wreaking havoc on daytime life.
By Audrey Simango and Ray Mwareya for Rest of World on February 7, 2023
Let me make the question specific to my field: are decisions made by technology just? Do you deserve the decision that rolls out of the machine?
Continue reading “From Vrij Nederland: Do we get what we deserve?”

A profound exploration of how the ceaseless extraction of information about our intimate lives is remaking both global markets and our very selves. The Costs of Connection represents an enormous step forward in our collective understanding of capitalism’s current stage, a stage in which the final colonial input is the raw data of human life. Challenging, urgent and bracingly original.
By Nick Couldry and Ulises A. Mejias for Colonized by Data
You won’t easily catch the popular chatbot ChatGPT using dirty words or racist language. It has been neatly trained by dozens of Kenyans. Their task: teaching the algorithm above all not to bring up murder, torture and rape, so that we, the users, are not served up filthy sludge.
By Maurits Martijn for De Correspondent on January 28, 2023
OpenAI used outsourced workers in Kenya earning less than $2 per hour to scrub toxicity from ChatGPT.
By Billy Perrigo for Time on January 18, 2023
The imminent demise of Twitter after Elon Musk’s takeover sparked an exodus of people leaving the platform, which is only expected to increase. The significant increase in hate speech, and the generally hostile atmosphere created by the erratic decrees of its owner (such as Trump’s reinstatement), made, in the New Yorker writer Jelani Cobb’s words, “remaining completely untenable”. This often vocal movement of people from the platform has sparked a debate on what people stand to lose and what the alternative is.
Continue reading “What’s at stake with losing (Black) Twitter and moving to (white) Mastodon?”

The end of 2022 was all about AI tools. You make digital artworks with DALL-E, AI profile pictures with Lensa, and, as the icing on the cake, you generate a whole cover letter or essay within seconds via ChatGPT. We already knew that AI, or artificial intelligence, can do a lot. But ChatGPT is really seen as a breakthrough. What is it? And will AI make us redundant? Oh, and Devran thought he would head into the new year nice and relaxed with the chatbot, but whether that was such a good idea…
By Robin Pocornie for YouTube on December 31, 2022
In a roundtable on artificial intelligence in the Dutch Parliament, Quirine Eijkman spoke on behalf of the Netherlands Institute for Human Rights about Robin Pocornie’s case against the discriminatory use of Proctorio at the VU university.
Continue reading “Dutch Institute for Human Rights speaks about Proctorio at Dutch Parliament”

Tiera Tanksley’s work seeks to better understand how forms of digitally mediated traumas, such as seeing images of Black people dead and dying on social media, are impacting Black girls’ mental and emotional wellness in the U.S. and Canada. Her fears were confirmed in her findings: Black girls report unprecedented levels of fear, depression, anxiety and chronic stress. Viewing Black people being killed by the state was deeply traumatic, with mental, emotional and physiological effects.
Continue reading “Profiting off Black bodies”

Just upload a selfie in the “AI avatar app” Lensa and it will generate a digital portrait of you. Think, for example, of a slightly more fit or beautiful version of yourself as an astronaut or the lead singer in a band. If you are a man, that is. As it turns out, for women, and especially women with Asian heritage, Lensa churns out pornified, sexy and skimpily clothed avatars.
Continue reading “Racist Technology in Action: Let’s make an avatar! Of sexy women and tough men of course”

Unsurprisingly, the artistic and ethical shortcomings of AI image generators are tied to their dependence on capital and capitalism.
By Marco Donnarumma for Hyperallergic on October 24, 2022
My avatars were cartoonishly pornified, while my male colleagues got to be astronauts, explorers, and inventors.
By Melissa Heikkilä for MIT Technology Review on December 12, 2022
Two Black academics discuss the rationale behind leaving Twitter or going down with the ship.
By Chris Gilliard and Kishonna Gray for WIRED on December 13, 2022
When large language models fall short, the consequences can be serious. Why is it so hard to acknowledge that?
By Abeba Birhane and Deborah Raji for WIRED on December 9, 2022
This is according to experts at the University of Cambridge, who suggest that current portrayals and stereotypes about AI risk creating a “racially homogenous” workforce.
By Kanta Dihal and Stephen Cave for University of Cambridge on August 6, 2020
I’m sure you’ve seen the tweets, and the think pieces about how much worse Twitter is gonna get. My friend Justin Hendrix mentioned losing a few hundred followers in the space of a few hours, after Elon brought a sink into Twitter headquarters (which is the lamest bit I’ve ever seen, a massive fail of a dad joke). A huge chunk of people I follow now have their Mastodon handles in their Twitter names. It’s a chunk of the influencers, academics, activists, and civil society folks, the researchers who I follow, who are actively mourning, and hand-wringing, about the destruction that is to come, already in the throes of grief for the Twitter that was. But the thing is: all of these folks are white.
By Caroline Sinders for Medium on October 31, 2022
Concerned about how seeing images of Black people dead and dying would affect young social media users, I conducted a study to understand how digitally mediated traumas were impacting Black girls’ mental and emotional wellness.
By Tiera Tanksley for SAGE Perspectives on January 4, 2023
Social media app, Spill, designed by former Twitter employees, Alphonzo “Phonz” Terrell and DeVaris Brown, is becoming the chosen alternative for many.
By Kumba Kpakima for POCIT on December 21, 2022
We show that statistics have human consequences.
From Human Rights Data Analysis Group