Moslima

In the two-part podcast ‘Moslima’, Cigdem Yuksel and Maartje Duin go in search of the origins of the stereotypical image of ‘the Muslim woman’.

By Cigdem Yuksel and Maartje Duin for VPRO on May 15, 2022

Racist Technology in Action: Beauty is in the eye of the AI

While people’s notions of beauty are often steeped in cultural preferences or plain prejudice, the objectivity of an AI system would surely give it access to a more universal conception of beauty – or so thought the developers of Beauty.AI. Alex Zhavoronkov, who consulted on the development of the Beauty.AI system, described the dystopian motivation behind it clearly: “Humans are generally biased and there needs to be a robot to provide an impartial opinion. Beauty.AI is the first step in a much larger story, in which a mobile app trained to evaluate perception of human appearance will evolve into a caring personal assistant to help users look their best and retain their youthful looks.”

Continue reading “Racist Technology in Action: Beauty is in the eye of the AI”

Diverse algoritmes Rijk voldoen niet aan basisvereisten

Responsible use of algorithms by the executive agencies of the Dutch central government is possible, but in practice this is not always the case. The Algemene Rekenkamer (Netherlands Court of Audit) established that 3 algorithms meet all basic requirements. For 6 others, various risks exist: inadequate monitoring of performance or effects, bias, data leaks, or unauthorised access.

From Algemene Rekenkamer on May 18, 2022

Bits of Freedom speaks to the Dutch Senate on discriminatory algorithms

Through an official parliamentary investigative committee, the Dutch Senate is examining how new regulation or law-making processes can help combat discrimination in the Netherlands. The committee focuses on four broad domains: the labour market, education, social security and policing. As part of these wide-ranging efforts, the Senate is hearing from a range of experts and civil society organisations. One contribution stands out from the perspective of racist technology: Nadia Benaissa from Bits of Freedom highlighted the dangers of predictive policing and other uses of automated systems in law enforcement.

Continue reading “Bits of Freedom speaks to the Dutch Senate on discriminatory algorithms”

De discriminatie die in data schuilt

The Dutch Senate (Eerste Kamer) is investigating the effectiveness of legislation against discrimination. Last Friday we were invited to tell the members of parliament about discrimination and algorithms. Below is the core of our story.

By Nadia Benaissa for Bits of Freedom on February 8, 2022

Costly birthplace: discriminating insurance practice

Two residents of Rome with exactly the same driving history, car, age, profession, and number of years holding a driving licence may be charged different prices when purchasing car insurance. Why? Because of their place of birth, according to a recent study.

By Francesco Boscarol for AlgorithmWatch on February 4, 2022

Racist Technology in Action: U.S. universities using race in their risk algorithms as a predictor for student success

An investigation by The Markup in March 2021 revealed that some universities in the U.S. are using software with a risk algorithm that treats a student’s race as one of the factors in predicting and evaluating how successful that student may be. Several universities have described race as a “high impact predictor”. The investigation found large disparities in how the software treated students of different races, with Black students deemed four times as high a risk as their White peers.

Continue reading “Racist Technology in Action: U.S. universities using race in their risk algorithms as a predictor for student success”

Reference man

Meet Reference man: a white man, roughly 1.75m tall and weighing about 80 kilos. Our world has been calibrated, tested and built around him. Sometimes that is merely clumsy, but occasionally it is life-threatening. In this four-part series, Sophie Frankenmolen takes the viewer along in her investigation of this bizarre phenomenon.

By Sophie Frankenmolen for NPO Start on January 13, 2022

Predictive policing reinforces and accelerates racial bias

In a recent investigative piece, The Markup and Gizmodo analysed 5.9 million crime predictions by PredPol, crime prediction software used by law enforcement agencies in the U.S. The results confirm the racist logic of predictive policing and its impact on individuals and neighbourhoods. Compared to Whiter, middle- and upper-income neighbourhoods, Black, Latino and poor neighbourhoods were relentlessly targeted by the software, which recommended increased police presence. The fewer White residents who lived in an area – and the more Black and Latino residents who lived there – the more likely PredPol was to predict a crime there. Some neighbourhoods in the dataset were the subject of more than 11,000 predictions.

Continue reading “Predictive policing reinforces and accelerates racial bias”

Shirley Cards

Photographer Ibarionex Perello recalls how school picture day went back in the 1970s at the Catholic school he attended in South Los Angeles. Kids would file into the school auditorium in matching uniforms, sit on a stool, and the photographer would snap a couple of images; that would be it. But when the pictures came back weeks later, Perello always noticed that the kids with lighter skin tones looked better — or at least more like themselves — while those with darker skin tones seemed hidden in shadows. That experience stuck with him, but he didn’t understand why it was happening until later in life.

From 99% Invisible on November 8, 2021

Discriminating Data

How big data and machine learning encode discrimination and create agitated clusters of comforting rage.

By Wendy Hui Kyong Chun for The MIT Press on November 1, 2021

Amnesty’s grim warning against another ‘Toeslagenaffaire’

In its report of 25 October, Amnesty slams the Dutch government’s use of discriminatory algorithms in the childcare benefits scandal (toeslagenaffaire) and warns that the likelihood of such a scandal occurring again is very high. The report, aptly titled ‘Xenophobic machines – Discrimination through unregulated use of algorithms in the Dutch childcare benefits scandal’, conducts a human rights analysis of a specific sub-element of the scandal: the use of algorithms and risk models. It builds on the findings of the Dutch data protection authority and several other government reports.

Continue reading “Amnesty’s grim warning against another ‘Toeslagenaffaire’”

Racist Technology in Action: Facebook labels black men as ‘primates’

Amid the reckoning of the Black Lives Matter movement in the summer of 2020, the Daily Mail, a British tabloid, posted a video featuring Black men in altercations with police officers and white civilians. In The New York Times, Ryan Mac reports how Facebook users who watched that video saw an automated prompt asking whether they would like to “keep seeing videos about Primates”, even though the video had nothing to do with primates or monkeys.

Continue reading “Racist Technology in Action: Facebook labels black men as ‘primates’”

Xenophobic machines: Discrimination through unregulated use of algorithms in the Dutch childcare benefits scandal

Social security enforcement agencies worldwide are increasingly automating their processes in the hope of detecting fraud. The Netherlands is at the forefront of this development. The Dutch tax authorities adopted an algorithmic decision-making system to create risk profiles of individuals applying for childcare benefits in order to detect inaccurate and potentially fraudulent applications at an early stage. Nationality was one of the risk factors used by the tax authorities to assess the risk of inaccuracy and/or fraud in the applications submitted. This report illustrates how the use of individuals’ nationality resulted in discrimination as well as racial profiling.

From Amnesty International on October 25, 2021

Raziye Buse Çetin: ‘The absence of marginalised people in AI policymaking’

Creating welcoming and safe spaces for racialised people in policymaking is essential for addressing AI harms. Since the beginning of my career as an AI policy researcher, I’ve witnessed many important instances where people of color were almost totally absent from AI policy conversations. I remember very well the feeling of discomfort I experienced when I was stopped at the entrance of a launch event for a report on algorithmic bias. The person who was tasked with ushering people into the meeting room was convinced that I was not “in the right place”. After a completely avoidable policing situation, I was in the room, but the room didn’t seem right to me. Although the topic was algorithmic bias and discrimination, I couldn’t spot one racialised person there — the people who are most likely to experience algorithmic harm.

By Raziye Buse Çetin for Who Writes The Rules on March 11, 2019

Dr Nakeema Stefflbauer: ‘#defundbias in online hiring and listen to the people in Europe whom AI algorithms harm’

The first time I applied to work at a European company, my interviewer verbally grilled me about my ethnic origin. “Is your family from Egypt? Morocco? Are you Muslim?” asked a white Belgian man looking for a project manager. He was the CEO. My CV at the time was US-style, without a photograph, but with descriptions of research I had conducted at various Middle East and North African universities. I’d listed my nationality and my BA, MA, and PhD degrees, which confirmed my Ivy League graduate status several times over. “Are either of your parents Middle Eastern?” the CEO persisted.

By Nakeema Stefflbauer for Who Writes The Rules on August 23, 2021

Racist Technology in Action: White preference in mortgage-approval algorithms

A very clear example of racist technology was exposed by Emmanuel Martinez and Lauren Kirchner in an article for The Markup. Algorithms used by a variety of American banks and lenders to automatically assess or advise on mortgages display clear racial disparities. In national data from the United States in 2019, they found that “loan applicants of color were 40%–80% more likely to be denied than their White counterparts. In certain metro areas, the disparity was greater than 250%.”

Continue reading “Racist Technology in Action: White preference in mortgage-approval algorithms”

If AI is the problem, is debiasing the solution?

The development and deployment of artificial intelligence (AI) in all areas of public life have raised many concerns about harmful consequences for society, in particular the impact on marginalised communities. EDRi’s latest report “Beyond Debiasing: Regulating AI and its Inequalities”, authored by Agathe Balayn and Dr Seda Gürses, argues that policymakers must tackle the root causes of the power imbalances caused by the pervasive use of AI systems. By promoting technical ‘debiasing’ as the main solution to AI-driven structural inequality, we risk vastly underestimating the scale of the social, economic and political problems AI systems can inflict.

By Agathe Balayn and Seda Gürses for European Digital Rights (EDRi) on September 21, 2021

We leven helaas nog steeds in een wereld waarin huidskleur een probleem is

‘Daddy, can I have that skin colour?’ Surprised, I looked up from the colouring page I was filling in to see my daughter pointing at a marker with a peach-like colour. Or perhaps it was closer to apricot. In any case, the marker was definitely not her skin colour. My daughter may be two shades lighter than I am, but she is unmistakably brown.

By Ilyaz Nasrulla for Trouw on September 23, 2021

Government: Stop using discriminatory algorithms

In her Volkskrant opinion piece, Nani Jansen Reventlow makes a forceful argument for the government to stop using algorithms that lead to discrimination and exclusion. Jansen Reventlow, director of the Digital Freedom Fund, uses a myriad of examples to show how disregarding the social nature of technological systems can reproduce existing social injustices such as racism and discrimination. She discusses SyRI, the automated fraud detection system that was ruled to be in violation of fundamental rights (as well as its dangerous successor, ‘Super SyRI’), and the racist proctoring software we wrote about earlier.

Continue reading “Government: Stop using discriminatory algorithms”
