The journalist and academic says the bias encoded in artificial intelligence systems can’t be fixed with better data alone – the change has to be societal.
By Meredith Broussard and Zoë Corbyn for The Guardian on March 26, 2023
How do algorithms contribute to racism? And what are the consequences? Those questions were addressed during a panel discussion on Wednesday afternoon at Science Park. 'We have to create a "safe space" in which companies dare to be transparent without immediately being punished for it.'
By Sija van den Beukel for Folia on March 16, 2023
You are not a parrot. And a chatbot is not a human. And a linguist named Emily M. Bender is very worried what will happen when we forget this.
By Elizabeth Weil and Emily M. Bender for New York Magazine on March 1, 2023
The current wave of reporting on the AI bubble has one advantage: it also creates a bit of space in the media to write about how AI reflects the existing inequities in our society.
Continue reading “Work related to the Racism and Technology Center is getting media attention”

As part of a series of investigative reports by Lighthouse Reports and WIRED, Gabriel Geiger has revealed some of the findings about the use of welfare fraud algorithms in Denmark. This follows the increasing use of algorithmic systems to detect welfare fraud across European cities, or at least those systems that are currently known.
Continue reading “Denmark’s welfare fraud system reflects a deeply racist and exclusionary society”

The algorithm that the city of Rotterdam used to predict the risk of welfare fraud fell into the hands of journalists. It turns out that the system was biased against marginalised groups like young mothers and people who don’t have Dutch as their first language.
Continue reading “Racist Technology in Action: Rotterdam’s welfare fraud prediction algorithm was biased”

An investigation by The Markup found racial disparities in L.A.’s intake system for unhoused people.
By Colin Lecher and Maddy Varner for The Markup on February 28, 2023
Obscure government algorithms are making life-changing decisions about millions of people around the world. Here, for the first time, we reveal how one of these systems works.
By Dhruv Mehrotra, Eva Constantaras, Gabriel Geiger, Htet Aung and Justin-Casimir Braun for WIRED on March 6, 2023
An algorithm with which the municipality of Rotterdam predicted welfare fraud for years counted young mothers and people who speak poor Dutch among the highest-risk groups. They stood the greatest chance of a strict check by the municipality. This emerges from research by Lighthouse Reports, Argos, Vers Beton and Follow the Money, in which journalists got hold of a complete fraud algorithm for the first time.
By David Davidson and Tom Claessens for Follow the Money on March 6, 2023
Mass profiling system SyRI resurfaces in the Netherlands despite ban and landmark court ruling.
By Allart van der Woude, Daniel Howden, David Davidson, Evaline Schot, Gabriel Geiger, Judith Konijn, Ludo Hekman, Marc Hijink, May Bulman and Saskia Adriaens for Lighthouse Reports on December 20, 2022
Word embeddings are a popular machine-learning method that represents each English word by a vector, such that the geometry between these vectors captures semantic relations between the corresponding words. We demonstrate that word embeddings can be used as a powerful tool to quantify historical trends and social change. As specific applications, we develop metrics based on word embeddings to characterize how gender stereotypes and attitudes toward ethnic minorities in the United States evolved during the 20th and 21st centuries starting from 1910. Our framework opens up a fruitful intersection between machine learning and quantitative social science.
By Dan Jurafsky, James Zou, Londa Schiebinger and Nikhil Garg for PNAS on April 3, 2018
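To make concrete how embedding geometry can be turned into a bias score, here is a minimal sketch in Python. It is not the paper’s exact method (the authors use decade-specific historical embeddings and a relative-norm-difference metric); the pretrained GloVe model and the short word lists below are illustrative assumptions.

```python
# Minimal sketch of an embedding-based bias score (assumptions: gensim with a
# pretrained GloVe model; the paper itself uses historical, decade-specific
# embeddings and a relative-norm-difference metric rather than this recipe).
import gensim.downloader as api
import numpy as np

model = api.load("glove-wiki-gigaword-100")  # downloads the vectors on first use

female_words = ["she", "her", "woman", "daughter", "mother"]
male_words = ["he", "his", "man", "son", "father"]
occupations = ["nurse", "engineer", "librarian", "carpenter", "teacher"]

def mean_similarity(word, group):
    """Average cosine similarity between a word and a group of anchor words."""
    return np.mean([model.similarity(word, g) for g in group])

for occ in occupations:
    bias = mean_similarity(occ, female_words) - mean_similarity(occ, male_words)
    # Positive: the occupation sits closer to the female anchor words in the
    # embedding space; negative: closer to the male anchor words.
    print(f"{occ:>10s}: {bias:+.3f}")
```

Repeating the same measurement on embeddings trained on text from different decades is what allows the authors to chart how occupational and ethnic stereotypes shifted over the 20th and 21st centuries.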
ChatGPT is an implementation of a so-called ‘large language model’. These models are trained on text from the internet at large. This means that these models inherit the bias that exists in our language and in our society. This has an interesting consequence: it suddenly becomes possible to see how bias changes over time in a quantitative and undeniable way.
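As a rough illustration of what such a quantitative probe could look like, here is a minimal sketch. It does not use ChatGPT itself (which cannot be inspected this way) but an open masked language model, bert-base-uncased; the template sentences and pronoun set are assumptions made for the example.

```python
# Minimal sketch: probing a masked language model for gendered associations.
# Assumption: Hugging Face transformers with bert-base-uncased standing in for
# a ChatGPT-like model, which is not openly inspectable in this way.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

templates = [
    "The doctor said [MASK] would arrive soon.",
    "The nurse said [MASK] would arrive soon.",
]

for sentence in templates:
    predictions = unmasker(sentence, top_k=5)
    # Keep only the pronoun completions and their probabilities.
    pronouns = {p["token_str"]: round(p["score"], 3)
                for p in predictions if p["token_str"] in {"he", "she"}}
    print(sentence, pronouns)
```

Comparing these probabilities across models trained on corpora from different periods would give exactly the kind of measurable trace of shifting bias described above.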
Continue reading “Quantifying bias in society with ChatGPT-like tools”

Just upload a selfie to the “AI avatar app” Lensa and it will generate a digital portrait of you. Think, for example, of a slightly more fit or beautiful version of yourself as an astronaut or the lead singer in a band. If you are a man, that is. As it turns out, for women, and especially women with Asian heritage, Lensa churns out pornified, sexy and skimpily clothed avatars.
Continue reading “Racist Technology in Action: Let’s make an avatar! Of sexy women and tough men of course”

My avatars were cartoonishly pornified, while my male colleagues got to be astronauts, explorers, and inventors.
By Melissa Heikkilä for MIT Technology Review on December 12, 2022
In the past few years, Black people have been inundated with the ways that technology hates us just as much as white supremacy does (then again, technology is an extension and weapon of white supremacy in many ways).
By Lelia Hampton for Lelia Hampton on December 13, 2020
Even after the ban on the ‘dragnet’ SyRI, the government still predicts fraud at addresses in socio-economically weaker neighbourhoods. Argos and Lighthouse Reports investigated the method, in which municipalities and agencies such as the Belastingdienst, UWV and the police share risk signals. ‘This is about a government that knows so much about you that it can always find something.’
By David Davidson and Saskia Adriaens for VPRO on December 20, 2022
Among global movements to reckon with police powers, a new report from UK research group No Tech For Tyrants unveils how police use surveillance technology to abuse power around the world.
From No Tech for Tyrants on November 7, 2022
Diana Sardjoe writes for Fair Trials about how her sons were profiled by the Amsterdam police on the basis of risk models (a form of predictive policing) called ‘Top600’ (for adults) and ‘Top400’ (for people aged 12 to 23). Because of this profiling her sons were “continually monitored and harassed by police.”
Continue reading “The devastating consequences of risk based profiling by the Dutch police”

To administer bar exams in 20 different states next week, ExamSoft is using facial recognition and collecting the biometric data of legal professionals.
By Khari Johnson for VentureBeat on September 29, 2020
There must be a moratorium on the use of algorithms in risk profiling, argues Samira Rafaela, Member of the European Parliament for D66.
By Samira Rafaela for Binnenlands Bestuur on October 10, 2022
You could easily come to think that artificial intelligence is only something to be wary of: a powerful weapon in the hands of the government or of tech companies guilty of privacy violations, discrimination or unjust punishments. But we can also use algorithms to solve problems and work towards a more just world, computer scientist Sennay Ghebreab of the Civic AI Lab tells Kustaw Bessems. For that, though, we do need to understand the basics and have more of a say in it.
By Kustaw Bessems and Sennay Ghebreab for Volkskrant on September 11, 2022
A Silicon Valley startup offers voice-altering tech to call center workers around the world: ‘Yes, this is wrong … but a lot of things exist in the world’
By Wilfred Chan for The Guardian on August 24, 2022
Understanding causes, recognizing cases, supporting those affected: documents for implementing a workshop.
By Waldemar Kesler for AlgorithmWatch on September 7, 2022
A recent study in robotics has drawn attention from news media such as The Washington Post and VICE. In this study, researchers programmed virtual robots with popular artificial intelligence algorithms. Then, these robots were asked to scan blocks containing pictures of people’s faces and decide which blocks to put into a virtual “box” according to an open-ended instruction. In the experiments, researchers quickly found out that these robots repeatedly picked women and people of color to be put in the “box” when they were asked to respond to words such as “criminal”, “homemaker”, and “janitor”. The behaviour of these robots shows that sexist and racist biases coded into AI algorithms have leaked into the field of robotics.
Continue reading “AI-trained robots bring algorithmic biases into robotics”

One of the classic examples of how AI systems can reinforce social injustice is Amazon’s A.I. hiring tool. In 2014, Amazon built an ‘A.I.-powered’ tool to assess resumes and recommend the top candidates who would go on to be interviewed. However, the tool turned out to be very biased, systematically preferring men over women.
Continue reading “Racist Technology in Action: How hiring tools can be sexist and racist”

During the pandemic, Dutch student Robin Pocornie had to do her exams with a light pointing straight at her face. Her fellow students who were White didn’t have to do that. Her university’s surveillance software discriminated against her, and that is why she has filed a complaint (read the full complaint in Dutch) with the Netherlands Institute for Human Rights.
Continue reading “Dutch student files complaint with the Netherlands Institute for Human Rights about the use of racist software by her university”

A student at the Vrije Universiteit Amsterdam (VU) is filing a complaint with the Netherlands Institute for Human Rights (pdf). When using the anti-cheating software for exams, she was only recognised if she shone a lamp on her face. According to her, the VU should have checked in advance whether students with a dark skin colour would be recognised just as well as white students.
From NU.nl on July 15, 2022
During the corona pandemic, student Robin Pocornie had to take exams with a lamp pointed directly at her face. Her white fellow students did not have to do that. The VU’s surveillance software discriminated against her, which is why she is filing a complaint today with the Netherlands Institute for Human Rights.
Continue reading “Student stapt naar College voor de Rechten van de Mens vanwege gebruik racistische software door de VU”

In his New York Times article, Mike Isaac describes how Meta is implementing a new system to automatically check whether the housing, employment and credit ads it hosts are shown to people equally. This move follows a 111,054 US dollar fine that the US Justice Department issued Meta because its ad systems have been shown to discriminate against its users by, amongst other things, excluding Black people from seeing certain housing ads in predominantly white neighbourhoods. This is the outcome of a long process, which we have written about previously.
Continue reading “Meta forced to change its advertisement algorithm to address algorithmic discrimination”

What is algorithmic discrimination, how is it caused and what can be done about it? These are the questions that are addressed in AlgorithmWatch’s newly published report Automated Decision-Making Systems and Discrimination.
Continue reading “A guidebook on how to combat algorithmic discrimination”

An example of racial bias in machine learning strikes again, this time in a program called PULSE, as reported by The Verge. Input a low-resolution image of Barack Obama – or another person of colour such as Alexandria Ocasio-Cortez or Lucy Liu – and the resulting AI-generated high-resolution image is distinctly of a white person.
Continue reading “Racist Technology in Action: Turning a Black person, White”

The Justice Department had accused Meta’s housing advertising system of discriminating against Facebook users based on their race, gender, religion and other characteristics.
By Mike Isaac for The New York Times on June 21, 2022
The images represent a glitch in the system that even its creator can’t explain.
By Nilesh Christopher for Rest of World on June 22, 2022
We are faced with automated decision-making systems almost every day and they might be discriminating, without us even knowing about it. A new guidebook helps to better recognize such cases and support those affected.
From AlgorithmWatch on June 21, 2022
In the two-part podcast ‘Moslima’, Cigdem Yuksel and Maartje Duin search for the origins of the standard image of ‘the Muslim woman’.
By Cigdem Yuksel and Maartje Duin for VPRO on May 15, 2022
The fuss about a bot’s ‘consciousness’ obscures far more troubling concerns.
By Kenan Malik for The Guardian on June 19, 2022
The Algemene Rekenkamer (Netherlands Court of Audit) looked into nine different algorithms used by the Dutch state. It found that only three of them fulfilled the most basic of requirements.
Continue reading “Shocking report by the Algemene Rekenkamer: state algorithms are a shitshow”

Whereas people’s notion of beauty is often steeped in cultural preferences or plain prejudice, the objectivity of an AI system would surely allow it to access a more universal conception of beauty – or so thought the developers of Beauty.AI. Alex Zhavoronkov, who consulted on the development of the Beauty.AI system, described the dystopian motivation behind the system clearly: “Humans are generally biased and there needs to be a robot to provide an impartial opinion. Beauty.AI is the first step in a much larger story, in which a mobile app trained to evaluate perception of human appearance will evolve into a caring personal assistant to help users look their best and retain their youthful looks.”
Continue reading “Racist Technology in Action: Beauty is in the eye of the AI”

Responsible use of algorithms by executive agencies of the national government is possible, but in practice this is not always the case. The Algemene Rekenkamer established that 3 algorithms meet all basic requirements. For 6 others there are various risks: inadequate checks on performance or effects, bias, data leaks or unauthorised access.
From Algemene Rekenkamer on May 18, 2022
Author Tarcízio Silva on how algorithmic racism exposes the myth of “racial democracy.”
By Alex González Ormerod and Tarcízio Silva for Rest of World on April 22, 2022