How and why algorithms discriminate

Automated decision-making systems contain hidden discriminatory biases. We’ll explain the causes, the possible consequences, and why existing laws do not provide sufficient protection against algorithmic discrimination.

By Pia Sombetzki for AlgorithmWatch on June 26, 2024

Racist Technology in Action: AI detection of emotion rates Black basketball players as ‘angrier’ than their White counterparts

In 2018, Lauren Rhue showed that two leading emotion detection software products had a racial bias against Black men: Face++ rated them as angrier, and Microsoft AI rated them as more contemptuous.

Continue reading “Racist Technology in Action: AI detection of emotion rates Black basketball players as ‘angrier’ than their White counterparts”

Racist Technology in Action: MyLife.com and discriminatory predation

MyLife.com is one of those immoral American companies that collects personal information to sell on as profiles, while at the same time suggesting to the people being profiled that incriminating information about them exists online, which they can have removed by buying a subscription (that then does nothing and auto-renews in perpetuity).

Continue reading “Racist Technology in Action: MyLife.com and discriminatory predation”

Students with a non-European migration background had a 3.0 times higher chance of receiving an unfounded home visit from the Dutch student grants fraud department

Last year, Investico revealed how DUO, the Dutch organization for administering student grants, was using a racist algorithm to decide which students would get a home visit to check for fraudulent behaviour. The Minister of Education immediately stopped the use of the algorithm.

Continue reading “Students with a non-European migration background had a 3.0 times higher chance of receiving an unfounded home visit from the Dutch student grants fraud department”

Dutch Ministry of Foreign Affairs dislikes the conclusions of a solid report that marks their visa process as discriminatory so buys a shoddy report saying the opposite

For more than a year now, the Dutch Ministry of Foreign Affairs has ignored advice from its experts and continued its use of discriminatory risk profiling of visa applicants.

Continue reading “Dutch Ministry of Foreign Affairs dislikes the conclusions of a solid report that marks their visa process as discriminatory so buys a shoddy report saying the opposite”

Follow-up study confirms indirect discrimination in checks on the grant for students living away from home

DUO commissioned the independent foundation Algorithm Audit to carry out a follow-up study into the way DUO checked, between 2012 and 2023, whether students were rightfully receiving student finance at the rate for those living away from home. The conclusions of the follow-up study confirm that students with a migration background were indirectly discriminated against in these checks.

From Dienst Uitvoering Onderwijs (DUO) on May 21, 2024

We’ll fix the mistakes later: how the municipality let a dubious algorithm loose on the people of Rotterdam

It was too good to be true: an algorithm to detect welfare fraud. Despite warnings, the municipality of Rotterdam kept believing in it for almost four years. A handful of civil servants, insufficiently aware of the ethical risks, were able to experiment undisturbed for years with the data of vulnerable people.

By Romy van Dijk and Saskia Klaassen for Vers Beton on October 23, 2023

Racist Technology in Action: The UK Home Office’s Sorting Algorithm and the Racist Violence of Borders

In 2020, two NGOs finally forced the UK Home Office’s hand, compelling it to abandon its secretive and racist algorithm for sorting visitor visa applications. Foxglove and the Joint Council for the Welfare of Immigrants (JCWI) had been battling the algorithm for years, arguing that it was a form of institutionalized racism and calling it “speedy boarding for white people.”

Continue reading “Racist Technology in Action: The UK Home Office’s Sorting Algorithm and the Racist Violence of Borders”

Borders and Bytes

So-called “smart” borders are just more sophisticated sites of racialized surveillance and violence. We need abolitionist tools to counter them.

By Ruha Benjamin for Inquest on February 13, 2024

Racist Technology in Action: Slower internet service for the same price in U.S. lower income areas with fewer White residents

Investigative reporting by The Markup showed how U.S. internet providers offer wildly different internet speeds for the same monthly fee. The neighbourhoods with the worst deals had lower median incomes and were very often the least White.

Continue reading “Racist Technology in Action: Slower internet service for the same price in U.S. lower income areas with fewer White residents”

Belastingdienst continues to break the law with potentially discriminatory fraud algorithms

After the childcare benefits scandal, the Belastingdienst was advised to immediately stop using three potentially discriminatory fraud algorithms. Yet the tax authority decided to carry on: organizational interests weighed more heavily than compliance with the law and the protection of fundamental rights. This emerges from documents released to Follow the Money two years after their disclosure was requested. ‘Incomprehensible and bewildering.’

By David Davidson and Sebastiaan Brommersma for Follow the Money on December 14, 2023

Not a solution: Meta’s new AI system to contain discriminatory ads

Meta has deployed a new AI system on Facebook and Instagram to fix its algorithmic bias problem for housing ads in the US. But it’s probably more band-aid than AI fairness solution. Gaps in Meta’s compliance report make it difficult to verify if the system is working as intended, which may preview what’s to come from Big Tech compliance reporting in the EU.

By John Albert for AlgorithmWatch on November 17, 2023

AI is far from a miracle cure, certainly not in the hospital

Detecting tumours, developing new medicines: there are plenty of promises about what artificial intelligence could mean for the medical world. But before you can leave such important work to technology, you need to understand exactly how it works. And we are nowhere near that point yet.

By Maurits Martijn for De Correspondent on November 6, 2023

Al Jazeera asks: Can AI eliminate human bias or does it perpetuate it?

In its online series of digital dilemmas, Al Jazeera takes a look at AI in relation to social inequities. Loyal readers of this newsletter will recognise many of the examples they touch on, like how Stable Diffusion exacerbates and amplifies racial and gender disparities or the Dutch childcare benefits scandal.

Continue reading “Al Jazeera asks: Can AI eliminate human bias or does it perpetuate it?”

Technology hits some groups of people in our society harder than others (and it should not be that way)

When we use technology, our societal problems are reflected and sometimes made worse. Those societal problems have a long history of unjust power structures, racism, sexism, and other forms of discrimination. We see it as our task to recognize those unjust structures and to resist them.

By Evely Austin, Ilja Schurink and Nadia Benaissa for Bits of Freedom on September 12, 2023

Dubious police algorithm ‘predicts’ who will commit violence in the future

Since 2015, the police have been using an algorithm to predict who will commit violence in the future. For Dutch people of Moroccan and Antillean descent, that likelihood was estimated to be higher because of their background. According to the police this no longer happens, but that does not remove the dangers of the model. ‘This algorithm carries enormous risks.’

By David Davidson and Marc Schuilenburg for Follow the Money on August 23, 2023

Dutch police used algorithm to predict violent behaviour without any safeguards

For many years, the Dutch police used a risk modelling algorithm to predict the chance that an individual suspect would commit a violent crime. Follow the Money exposed the total lack of moral, legal, and statistical justification for its use, and the police have now stopped using the system.

Continue reading “Dutch police used algorithm to predict violent behaviour without any safeguards”

Racist Technology in Action: The World Bank’s Poverty Targeting Algorithms Deprive People of Social Security

A system funded by the World Bank to assess who is most in need of support is reported to be not only faulty but also discriminatory, depriving many people of their right to social security. In a recent report titled “Automated Neglect: How The World Bank’s Push to Allocate Cash Assistance Using Algorithms Threatens Rights”, Human Rights Watch outlines why the system used in Jordan in particular should be abandoned.

Continue reading “Racist Technology in Action: The World Bank’s Poverty Targeting Algorithms Deprive People of Social Security”

Women of colour are leading the charge against racist AI

In this Dutch-language piece for De Groene Amsterdammer, Marieke Rotman offers an accessible introduction to the main voices, both international and Dutch, tirelessly fighting against racism and discrimination in AI systems. Not coincidentally, most of the people doing this labour are women of colour. The piece guides you through their impressive work and their leading perspectives on the dynamics of racism and technology.

Continue reading “Women of colour are leading the charge against racist AI”

Racist Technology in Action: How Pokémon Go inherited existing racial inequities

When Aura Bogado was playing Pokémon Go in a much Whiter neighbourhood than the one where she lived, she noticed how many more PokéStops were suddenly available. She then crowdsourced the locations of these stops and, together with the Urban Institute think tank, found that majority White neighbourhoods had on average 55 PokéStops while majority Black neighbourhoods had 19.

Continue reading “Racist Technology in Action: How Pokémon Go inherited existing racial inequities”
