Proctoring software uses fudge factor for dark-skinned students to adjust their suspicion score

Respondus, a vendor of online proctoring software, has been granted a patent for their “systems and methods for assessing data collected by automated proctoring.” The patent shows that their example method for calculating a risk score adjusts the score on the basis of people’s skin colour.

Continue reading “Proctoring software uses fudge factor for dark-skinned students to adjust their suspicion score”
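The patent itself publishes no runnable code, but the general mechanism, an automated suspicion score scaled by a correction factor, can be illustrated with a hypothetical sketch. The function name, the use of a face-detection rate as the adjustment input, and all the numbers below are illustrative assumptions, not Respondus’s actual method:

```python
def adjusted_risk_score(raw_score: float, face_detection_rate: float) -> float:
    """Hypothetical illustration of a 'fudge factor' on a proctoring risk score.

    Face detection fails more often for dark-skinned students, so using its
    success rate as a scaling factor bakes skin colour into the final score.
    """
    # Assumed adjustment: scale the raw score by how often a face was detected,
    # clamped so the factor never drops to zero.
    fudge_factor = max(face_detection_rate, 0.1)
    return raw_score * fudge_factor

# Two students exhibiting identical behaviour receive different scores purely
# because the face detector works less reliably on one of them.
score_a = adjusted_risk_score(8.0, 0.95)  # detector works well
score_b = adjusted_risk_score(8.0, 0.60)  # detector fails often
print(score_a, score_b)
```

The point of the sketch is that no explicit “race” variable is needed: any input that correlates with skin colour, such as a detector’s failure rate, carries the bias into the score.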

Intuit: “Our fraud fights racism”

Today’s key concept is “predatory inclusion”: “a process wherein lenders and financial actors offer needed services to Black households but on exploitative terms that limit or eliminate their long-term benefits”.

By Cory Doctorow for Pluralistic on September 27, 2023

Use of machine translation tools exposes already vulnerable asylum seekers to even more risks

The use of and reliance on machine translation tools in asylum-seeking procedures has become increasingly common among government contractors and organisations working with refugees and migrants. This Guardian article highlights many of the issues documented by Respond Crisis Translation, a network of people who provide urgent interpretation services for migrants and refugees. The problems with machine translation tools occur throughout the asylum process, from border stations to detention centers to immigration courts.

Continue reading “Use of machine translation tools exposes already vulnerable asylum seekers to even more risks”

Al Jazeera asks: Can AI eliminate human bias or does it perpetuate it?

In its online series of digital dilemmas, Al Jazeera takes a look at AI in relation to social inequities. Loyal readers of this newsletter will recognise many of the examples they touch on, like how Stable Diffusion exacerbates and amplifies racial and gender disparities or the Dutch childcare benefits scandal.

Continue reading “Al Jazeera asks: Can AI eliminate human bias or does it perpetuate it?”

Does AI perpetuate human bias?

AI bias is not new. Rather, it is a problem that is escalating as newer AI technologies are deployed across more parts of our lives. Who does AI discriminate against, and why? Dutch student Robin Pocornie tells us why she submitted a claim against her university for using an AI exam supervision system. New York-based data reporter Lam Thuy Vo points to the insufficient and inadequate datasets AI is trained on, while Berlin-based tech expert Nakeema Stefflbauer talks about systemic biases entrenched in AI design. When it comes to AI chatbots like ChatGPT, Vanderbilt University’s Jules White argues that bias is largely brought out by users themselves. And with Naomi Appelman, co-founder of the Racism and Technology Center in Amsterdam, we discuss the idea of technological objectivity that persists in our society.

By Jules White, Lam Thuy Vo, Nakeema Stefflbauer, Naomi Appelman, Robin Pocornie and Samantha Johnson for YouTube on September 26, 2023

Data Work and its Layers of (In)visibility

No technology has seemingly steam-rolled through every industry and over every community the way artificial intelligence (AI) has in the past decade. Many speak of the inevitable crisis that AI will bring. Others sing its praises as a new Messiah that will save us from the ills of society. What the public and mainstream media hardly ever discuss is that AI is a technology that takes its cues from humans. Any present or future harms caused by AI are the direct result of deliberate human decisions, with companies prioritising record profits and attempting to concentrate power by convincing the world that technology is the only solution to societal problems.

By Adrienne Williams and Milagros Miceli for Just Tech on September 6, 2023

Technology hits some groups of people in our society harder than others (and that should not be the case)

Our use of technology reflects, and sometimes worsens, our societal problems. Those problems have a long history of unjust power structures, racism, sexism and other forms of discrimination. We see it as our task to recognise those unjust structures and to resist them.

By Evely Austin, Ilja Schurink and Nadia Benaissa for Bits of Freedom on September 12, 2023

Suspected because you live at a ‘verwonderadres’: ‘They kept insisting that I open the door’

Government fraud hunters collaborating under the banner of the Landelijke Stuurgroep Interventieteams select ‘verwonderadressen’ (addresses flagged as puzzling) across the country, where residents might be committing fraud. A reconstruction shows how a family in Veenendaal came into view and received inspectors at the door at three addresses. ‘We heard from the neighbours that they were watching our house from the bushes.’

By David Davidson for Follow the Money on September 6, 2023

Dubious police algorithm ‘predicts’ who will commit violence in the future

Since 2015, the police have used an algorithm to predict who will commit violence in the future. Dutch people of Moroccan and Antillean descent were assigned a higher probability because of their background. According to the police this no longer happens, but that does not resolve the dangers of the model. ‘This algorithm carries enormous risks.’

By David Davidson and Marc Schuilenburg for Follow the Money on August 23, 2023

Dutch police used algorithm to predict violent behaviour without any safeguards

For many years the Dutch police have used a risk modelling algorithm to predict the chance that an individual suspect will commit a violent crime. Follow the Money exposed the total lack of a moral, legal, and statistical justification for its use, and the police have now stopped using the system.

Continue reading “Dutch police used algorithm to predict violent behaviour without any safeguards”

Racist Technology in Action: The World Bank’s Poverty Targeting Algorithms Deprive People of Social Security

A system funded by the World Bank to assess who is most in need of support is reported to be not only faulty, but also discriminatory, depriving many of their right to social security. In a recent report titled “Automated Neglect: How The World Bank’s Push to Allocate Cash Assistance Using Algorithms Threatens Rights”, Human Rights Watch outlines why the system used in Jordan should be abandoned.

Continue reading “Racist Technology in Action: The World Bank’s Poverty Targeting Algorithms Deprive People of Social Security”

Met het Oog op Morgen: Facial recognition fails to recognise Black woman

A remarkable case in the United States: a woman is arrested for robbery and car theft. But the woman is heavily pregnant and did not commit the crime at all. She came into the picture because a facial recognition system picked her out. She was handcuffed in front of her children. Later it turned out: it was not her.

By Naomi Appelman and Rob Trip for NPO Radio 1 on August 9, 2023

Women of colour are leading the charge against racist AI

In this Dutch-language piece for De Groene Amsterdammer, Marieke Rotman offers an accessible introduction of the main voices, both internationally and in the Netherlands, tirelessly fighting against racism and discrimination in AI-systems. Not coincidentally, most of the people doing this labour are women of colour. The piece guides you through their impressive work and leading perspectives on the dynamics of racism and technology.

Continue reading “Women of colour are leading the charge against racist AI”

Racist Technology in Action: How Pokémon Go inherited existing racial inequities

When Aura Bogado was playing Pokémon Go in a much Whiter neighbourhood than the one where she lived, she noticed how many more PokéStops were suddenly available. She then crowdsourced the locations of these stops and, together with the Urban Institute think tank, found that there were on average 55 PokéStops in majority White neighbourhoods and 19 in majority Black neighbourhoods.

Continue reading “Racist Technology in Action: How Pokémon Go inherited existing racial inequities”
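The Urban Institute comparison boils down to a group-by average over crowdsourced stop counts. A minimal sketch of that computation, using made-up sample records rather than the actual dataset, might look like:

```python
from collections import defaultdict

# Hypothetical crowdsourced records: (neighbourhood majority group, PokéStop count)
observations = [
    ("majority_white", 52), ("majority_white", 58),
    ("majority_black", 17), ("majority_black", 21),
]

# Bucket counts by neighbourhood group, then average each bucket.
totals: dict[str, list[int]] = defaultdict(list)
for group, count in observations:
    totals[group].append(count)

averages = {group: sum(counts) / len(counts) for group, counts in totals.items()}
print(averages)  # {'majority_white': 55.0, 'majority_black': 19.0}
```

The disparity itself comes from the input data: PokéStop locations were seeded from an earlier crowdsourced game whose players skewed White, so the averages simply surface an inequity that was already there.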

Women of colour especially are calling out AI’s biases

What you put into self-learning AI systems is what you get back. Technology, largely developed by white men, thereby amplifies and obscures existing biases. Women (of colour) in particular are sounding the alarm.

By Marieke Rotman, Nani Jansen Reventlow, Oumaima Hajri and Tanya O’Carroll for De Groene Amsterdammer on July 12, 2023

France wants to legalise mass surveillance for the Paris Olympics 2024: “Safety” and “security”, for whom?

Many governments are using mass surveillance to support law enforcement for the purposes of safety and security. In France, the Parliament (and before it, the Senate) has approved the use of automated behavioural video surveillance at the 2024 Paris Olympics. Simply put, France wants to legalise mass surveillance at the national level, which can violate many rights, such as the freedom of assembly and association, privacy, and non-discrimination.

Continue reading “France wants to legalise mass surveillance for the Paris Olympics 2024: ‘Safety’ and ‘security’, for whom?”

Racist Technology in Action: Stable Diffusion exacerbates and amplifies racial and gender disparities

Bloomberg’s researchers used Stable Diffusion to gauge the magnitude of biases in generative AI. Analysing more than 5,000 images created by Stable Diffusion, they found that it takes racial and gender disparities to extremes, with results worse than those found in the real world.

Continue reading “Racist Technology in Action: Stable Diffusion exacerbates and amplifies racial and gender disparities”
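Bloomberg’s exact pipeline is not reproduced here, but the core of such an audit, comparing the demographic distribution in generated images against a real-world baseline, can be sketched with hypothetical counts (the category labels and all numbers below are assumptions for illustration):

```python
# Hypothetical counts of perceived skin tone across 5,000 generated images
# for a single prompt, compared against an assumed real-world baseline share.
generated = {"lighter_skin": 4650, "darker_skin": 350}
real_world_share = {"lighter_skin": 0.70, "darker_skin": 0.30}

total = sum(generated.values())
for group, count in generated.items():
    model_share = count / total
    # Amplification > 1 means the model over-represents the group
    # relative to the baseline; < 1 means it under-represents it.
    amplification = model_share / real_world_share[group]
    print(f"{group}: model {model_share:.0%} vs baseline "
          f"{real_world_share[group]:.0%} (x{amplification:.2f})")
```

An amplification factor above 1 for one group and below 1 for another is what “taking disparities to extremes” means in measurable terms: the model’s output skews further than the world it was trained on.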
