Inside the Suspicion Machine

Obscure government algorithms are making life-changing decisions about millions of people around the world. Here, for the first time, we reveal how one of these systems works.

By Dhruv Mehrotra, Eva Constantaras, Gabriel Geiger, Htet Aung and Justin-Casimir Braun for WIRED on March 6, 2023

The Algorithm Addiction

Mass profiling system SyRI resurfaces in the Netherlands despite ban and landmark court ruling.

By Allart van der Woude, Daniel Howden, David Davidson, Evaline Schot, Gabriel Geiger, Judith Konijn, Ludo Hekman, Marc Hijink, May Bulman and Saskia Adriaens for Lighthouse Reports on December 20, 2022

Word embeddings quantify 100 years of gender and ethnic stereotypes

Word embeddings are a popular machine-learning method that represents each English word by a vector, such that the geometry between these vectors captures semantic relations between the corresponding words. We demonstrate that word embeddings can be used as a powerful tool to quantify historical trends and social change. As specific applications, we develop metrics based on word embeddings to characterize how gender stereotypes and attitudes toward ethnic minorities in the United States evolved during the 20th and 21st centuries starting from 1910. Our framework opens up a fruitful intersection between machine learning and quantitative social science.

By Dan Jurafsky, James Zou, Londa Schiebinger and Nikhil Garg for PNAS on April 3, 2018
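
As a rough illustration of the kind of metric the paper describes, here is a minimal sketch that scores a few occupation words by how much closer they sit to "female" than to "male" word groups in publicly available GloVe vectors. The word lists and the choice of model are illustrative assumptions, not the authors' exact setup.

```python
# Minimal sketch of an embedding-based bias metric, loosely in the spirit of the
# paper: compare how close occupation words sit to "female" versus "male" word
# groups. Word lists and model choice are illustrative, not the authors' setup.
import numpy as np
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # small, publicly available embeddings

female_words = ["she", "her", "woman", "daughter", "mother"]
male_words = ["he", "his", "man", "son", "father"]
occupations = ["nurse", "engineer", "librarian", "carpenter", "teacher"]

def group_mean(words):
    return np.mean([vectors[w] for w in words], axis=0)

f_centre, m_centre = group_mean(female_words), group_mean(male_words)

for occ in occupations:
    v = vectors[occ]
    # Negative score: closer to the female group; positive: closer to the male group.
    score = np.linalg.norm(v - f_centre) - np.linalg.norm(v - m_centre)
    print(f"{occ:10s} bias score: {score:+.3f}")
```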

Quantifying bias in society with ChatGPT-like tools

ChatGPT is an implementation of a so-called ‘large language model’. These models are trained on text from the internet at large, which means that they inherit the biases that exist in our language and in our society. This has an interesting consequence: it suddenly becomes possible to see how bias changes over time in a quantitative and undeniable way.

Continue reading “Quantifying bias in society with ChatGPT-like tools”
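
A minimal sketch of what such a measurement could look like in practice, using a masked language model through the Hugging Face fill-mask pipeline. The sentences and the choice of bert-base-uncased are assumptions made for illustration, not the method described in the post.

```python
# Sketch: probe a masked language model for gendered associations by comparing
# the probabilities it assigns to "he" and "she" in stereotyped sentence frames.
# Model and sentences are illustrative assumptions, not the post's method.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

sentences = [
    "[MASK] worked as a nurse.",
    "[MASK] worked as an engineer.",
]

for sentence in sentences:
    results = fill(sentence, targets=["he", "she"])
    scores = {r["token_str"]: r["score"] for r in results}
    print(sentence, scores)
```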

Racist Technology in Action: Let’s make an avatar! Of sexy women and tough men of course

Just upload a selfie in the “AI avatar app” Lensa and it will generate a digital portrait of you. Think, for example, of a slightly more fit or beautiful version of yourself as an astronaut or the lead singer in a band. If you are a man that is. As it turns out, for women, and especially women with Asian heritage, Lensa churns out pornified, sexy and skimpily clothed avatars.

Continue reading “Racist Technology in Action: Let’s make an avatar! Of sexy women and tough men of course”

In poor neighbourhoods, the government still predicts fraud

Even after the ban on the ‘dragnet’ SyRI, the government still predicts fraud at addresses in socio-economically weaker neighbourhoods. Argos and Lighthouse Reports investigated the method, in which municipalities and agencies such as the Belastingdienst, UWV and the police share risk signals. ‘This is about a government that knows so much about you that it can always find something.’

By David Davidson and Saskia Adriaens for VPRO on December 20, 2022

The devastating consequences of risk based profiling by the Dutch police

Diana Sardjoe writes for Fair Trials about how her sons were profiled by the Amsterdam police on the basis of risk models (a form of predictive policing) called ‘Top600’ (for adults) and ‘Top400’ (for people aged 12 to 23). Because of this profiling her sons were “continually monitored and harassed by police.”

Continue reading “The devastating consequences of risk based profiling by the Dutch police”

You can also do something good with artificial intelligence

It is easy to think of artificial intelligence as something only to be wary of: a powerful weapon in the hands of governments or tech companies guilty of privacy violations, discrimination or unjust punishments. But we can also use algorithms to solve problems and work towards a more just world, computer scientist Sennay Ghebreab of the Civic AI Lab tells Kustaw Bessems. For that, though, we do need to understand the basics and have more of a say over how these systems are used.

By Kustaw Bessems and Sennay Ghebreab for Volkskrant on September 11, 2022

AI-trained robots bring algorithmic biases into robotics

A recent study in robotics has drawn attention from news media such as The Washington Post and VICE. In this study, researchers programmed virtual robots with popular artificial intelligence algorithms. These robots were then asked to scan blocks containing pictures of people’s faces and decide which blocks to put into a virtual “box” in response to an open-ended instruction. The researchers quickly found that the robots repeatedly picked women and people of color to be put in the “box” when asked to respond to words such as “criminal”, “homemaker”, and “janitor”. The behaviour of these robots shows that the sexist and racist biases coded into AI algorithms have leaked into the field of robotics.

Continue reading “AI-trained robots bring algorithmic biases into robotics”
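
The study builds on image-text models of the CLIP family, which score how well a photo matches a text phrase. The sketch below shows how a single face photo can be scored against descriptive phrases using the openly available CLIP weights; the image path and the phrases are placeholders for illustration, not the study's actual pipeline.

```python
# Sketch: score one face photo against descriptive phrases with CLIP, the kind of
# image-text model the robotics study builds on. Image path and phrases are
# placeholders for illustration.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("face_block.jpg")  # hypothetical photo on one of the blocks
phrases = ["a photo of a criminal", "a photo of a doctor", "a photo of a homemaker"]

inputs = processor(text=phrases, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # similarity of the image to each phrase

for phrase, prob in zip(phrases, logits.softmax(dim=-1)[0].tolist()):
    print(f"{phrase}: {prob:.2f}")
```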

Racist Technology in Action: How hiring tools can be sexist and racist

One of the classic examples of how AI systems can reinforce social injustice is Amazon’s A.I. hiring tool. In 2014, Amazon built an ‘A.I.-powered’ tool to assess resumes and recommend the top candidates who would go on to be interviewed. However, the tool turned out to be heavily biased, systematically preferring men over women.

Continue reading “Racist Technology in Action: How hiring tools can be sexist and racist”
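
To illustrate the mechanism rather than Amazon's actual (undisclosed) model, here is a deliberately tiny sketch: a text classifier trained on historical "hired / not hired" labels that skew against resumes mentioning women's activities ends up assigning those terms negative weight. All data below are invented.

```python
# Toy illustration of how a resume classifier trained on skewed historical
# decisions picks up gendered proxy terms. This is NOT Amazon's system; the
# data below are invented to make the mechanism visible.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain of the chess club, python developer",
    "captain of the women's chess club, python developer",
    "java developer, rugby team",
    "java developer, women's rugby team",
    "python developer, hackathon winner",
    "python developer, women's coding society, hackathon winner",
]
# Historical labels that systematically disadvantage the "women's" resumes.
hired = [1, 0, 1, 0, 1, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
clf = LogisticRegression().fit(X, hired)

weights = dict(zip(vectorizer.get_feature_names_out(), clf.coef_[0]))
print("weight for 'women':", round(weights["women"], 3))  # negative: the proxy term is penalised
```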

Dutch student files complaint with the Netherlands Institute for Human Rights about the use of racist software by her university

During the pandemic, Dutch student Robin Pocornie had to take her exams with a light pointing straight at her face, while her white fellow students did not. Her university’s surveillance software discriminated against her, and that is why she has filed a complaint (read the full complaint in Dutch) with the Netherlands Institute for Human Rights.

Continue reading “Dutch student files complaint with the Netherlands Institute for Human Rights about the use of racist software by her university”

Student reports discrimination by anti-cheating software to the College voor de Rechten van de Mens

A student at the Vrije Universiteit Amsterdam (VU) has filed a complaint with the College voor de Rechten van de Mens (pdf). When using the anti-cheating software for exams, she was only recognised if she shone a lamp directly at her face. According to her, the VU should have checked in advance whether students with dark skin would be recognised as well as white students.

From NU.nl on July 15, 2022
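
The complaint hinges on a check the VU could have run before deployment: does the software detect faces equally well across skin tones? A minimal sketch of such a parity check, with entirely invented test data, could look like this:

```python
# Sketch of a pre-deployment parity check for face detection in proctoring
# software: compare detection rates across skin-tone groups. Data are invented.
from collections import Counter

# (group, face_detected) pairs from a hypothetical test session
results = [
    ("darker skin", True), ("darker skin", False), ("darker skin", False),
    ("lighter skin", True), ("lighter skin", True), ("lighter skin", True),
]

totals, detected = Counter(), Counter()
for group, ok in results:
    totals[group] += 1
    detected[group] += ok

for group in totals:
    print(f"{group}: detected in {detected[group]}/{totals[group]} attempts")
```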

Student goes to the College voor de Rechten van de Mens over the VU’s use of racist software

During the corona pandemic, student Robin Pocornie had to take exams with a lamp pointed directly at her face. Her white fellow students did not. The VU’s surveillance software discriminated against her, which is why she is filing a complaint today with the College voor de Rechten van de Mens.

Continue reading “Student goes to the College voor de Rechten van de Mens over the VU’s use of racist software”

Meta forced to change its advertisement algorithm to address algorithmic discrimination

In his New York Times article, Mike Isaac describes how Meta is implementing a new system to automatically check whether the housing, employment and credit ads it hosts are shown to people equally. This move follows a US$111,054 fine that the US Justice Department issued to Meta because its ad systems had been shown to discriminate against its users by, amongst other things, excluding Black people from seeing certain housing ads in predominantly white neighbourhoods. This is the outcome of a long process, which we have written about previously.

Continue reading “Meta forced to change its advertisement algorithm to address algorithmic discrimination”
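
The article does not describe how Meta's new system works internally, so the sketch below only shows the kind of comparison such a system would need to make: the demographic make-up of who actually saw an ad versus the eligible audience, summarised with a simple total-variation-style skew score. All numbers are invented.

```python
# Sketch: measure how far an ad's actual viewers deviate from the eligible
# audience, per demographic group. Numbers are invented; this is not Meta's
# actual method, which the article does not describe.
eligible = {"group_a": 0.50, "group_b": 0.30, "group_c": 0.20}  # audience shares
shown_to = {"group_a": 0.72, "group_b": 0.20, "group_c": 0.08}  # impression shares

# Total-variation-style skew: 0 means delivery matches the audience exactly.
skew = 0.5 * sum(abs(shown_to[g] - eligible[g]) for g in eligible)
print(f"delivery skew: {skew:.2f}")

for g in eligible:
    print(f"{g}: eligible {eligible[g]:.0%}, shown {shown_to[g]:.0%}")
```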

Racist Technology in Action: Turning a Black person, White

An example of racial bias in machine learning strikes again, this time in a program called PULSE, as reported by The Verge. Feed it a low-resolution image of Barack Obama – or of another person of colour such as Alexandria Ocasio-Cortez or Lucy Liu – and the resulting AI-generated high-resolution image is distinctly that of a white person.

Continue reading “Racist Technology in Action: Turning a Black person, White”
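
PULSE upscales by searching a face generator's latent space for an image whose downscaled version matches the low-resolution input, so the output reflects whatever faces the generator's prior considers most plausible. The sketch below reproduces only that search loop, with a toy linear "generator" standing in for the real StyleGAN model; it illustrates the mechanism, not the actual PULSE code.

```python
# Toy sketch of the PULSE-style objective: find a latent code whose generated
# image, once downscaled, matches a low-resolution target. The "generator" here
# is a random linear map standing in for a real face GAN.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
latent_dim, size = 32, 16
W = torch.randn(size * size, latent_dim) / latent_dim ** 0.5

def generate(z):                   # stand-in for G(z)
    return torch.tanh(W @ z).reshape(1, 1, size, size)

def downscale(img, factor=4):      # 16x16 -> 4x4
    return F.avg_pool2d(img, factor)

target_lr = downscale(generate(torch.randn(latent_dim)))  # the low-res input

z = torch.randn(latent_dim, requires_grad=True)
optimizer = torch.optim.Adam([z], lr=0.05)
for _ in range(500):
    optimizer.zero_grad()
    loss = F.mse_loss(downscale(generate(z)), target_lr)
    loss.backward()
    optimizer.step()

# Many different high-res images share the same 4x4 downscaling; which one the
# search lands on is decided by the generator's (biased) prior over faces.
print("final reconstruction error:", loss.item())
```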

Moslima

In the two-part podcast ‘Moslima’, Cigdem Yuksel and Maartje Duin search for the origins of the stock image of ‘the Muslim woman’.

By Cigdem Yuksel and Maartje Duin for VPRO on May 15, 2022

Racist Technology in Action: Beauty is in the eye of the AI

Where people’s notion of beauty is often steeped in cultural preferences or plain prejudice, the objectivity of an AI system would surely allow it to access a more universal conception of beauty – or so thought the developers of Beauty.AI. Alex Zhavoronkov, who consulted on the development of the Beauty.AI system, described the dystopian motivation behind the system clearly: “Humans are generally biased and there needs to be a robot to provide an impartial opinion. Beauty.AI is the first step in a much larger story, in which a mobile app trained to evaluate perception of human appearance will evolve into a caring personal assistant to help users look their best and retain their youthful looks.”

Continue reading “Racist Technology in Action: Beauty is in the eye of the AI”

Several central government algorithms do not meet basic requirements

Responsible use of algorithms by executive agencies of the Dutch central government is possible, but in practice this is not always the case. The Algemene Rekenkamer (Netherlands Court of Audit) found that 3 algorithms meet all basic requirements. For 6 others, there are various risks: inadequate monitoring of performance or effects, bias, data leaks or unauthorised access.

From Algemene Rekenkamer on May 18, 2022

Bits of Freedom speaks to the Dutch Senate on discriminatory algorithms

In an official parliamentary investigative committee, the Dutch Senate is investigating how new regulation or law-making processes can help combat discrimination in the Netherlands. The committee focuses on four broad domains: the labour market, education, social security and policing. As part of these wide investigative efforts, the Senate is hearing from a range of experts and civil society organisations. One contribution stands out from the perspective of racist technology: Nadia Benaissa from Bits of Freedom highlighted the dangers of predictive policing and other uses of automated systems in law enforcement.

Continue reading “Bits of Freedom speaks to the Dutch Senate on discriminatory algorithms”

The discrimination hidden in data

The Eerste Kamer (Dutch Senate) is investigating the effectiveness of legislation against discrimination. Last Friday we were invited to tell the members of parliament about discrimination and algorithms. Below is the core of our story.

By Nadia Benaissa for Bits of Freedom on February 8, 2022

Costly birthplace: discriminating insurance practice

Two residents of Rome with exactly the same driving history, car, age, profession, and number of years holding a driving licence may be charged different prices when purchasing car insurance. Why? Because of their place of birth, according to a recent study.

By Francesco Boscarol for AlgorithmWatch on February 4, 2022
