Google does performative identity politics, nonpologises, pauses its efforts, and will invariably move on to its next shitty moneymaking move

In a shallow attempt to do representation for representation’s sake, Google has managed to draw the ire of the right-wing internet by generating historically inaccurate and overly inclusive portraits of historical figures.

Dutch Higher Education continues to use inequitable proctoring software

In October last year, RTL news showed that Proctorio’s software, used to check whether students are cheating during online exams, works less well for students of colour. Five months later, RTL asked the twelve Dutch educational institutions on Proctorio’s client list whether they were still using the tool. Eight say they still do.

Racist Technology in Action: Slower internet service for the same price in U.S. lower income areas with fewer White residents

Investigative reporting by The Markup showed how U.S. internet providers offer wildly different internet speeds for the same monthly fee. The neighbourhoods with the worst deals had lower median incomes and were very often the least White.

Judgement of the Dutch Institute for Human Rights shows how difficult it is to legally prove algorithmic discrimination

On October 17th, the Netherlands Institute for Human Rights ruled that the VU did not discriminate against bioinformatics student Robin Pocornie on the basis of race by using anti-cheating software. However, according to the Institute, the VU did discriminate on the grounds of race in how it handled her complaint.

Proctoring software uses a fudge factor for dark-skinned students to adjust their suspicion score

Respondus, a vendor of online proctoring software, has been granted a patent for their “systems and methods for assessing data collected by automated proctoring.” The patent shows that their example method for calculating a risk score is adjusted on the basis of people’s skin colour.
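
The excerpt does not reproduce the patent’s actual formula, so the following is a purely hypothetical sketch (invented function, names, and numbers) of why adjusting a suspicion score on the basis of skin colour is discriminatory by construction: two students with identical exam behaviour receive different scores, and skin colour is the deciding input.

```python
# Hypothetical illustration only; this is NOT the formula from the Respondus
# patent. The point: once skin colour is an input to the score, two students
# with identical behaviour are scored differently by construction.
def suspicion_score(flagged_events: int, face_detect_confidence: float,
                    dark_skinned: bool) -> float:
    score = flagged_events * 10 + (1 - face_detect_confidence) * 50
    if dark_skinned:
        # The "fudge factor": instead of fixing a face-detection model that
        # works worse on darker skin, the score itself is adjusted.
        score *= 0.8  # made-up multiplier
    return score

# Identical behaviour, different outcomes:
print(suspicion_score(2, 0.9, dark_skinned=False))  # 25.0
print(suspicion_score(2, 0.9, dark_skinned=True))   # 20.0
```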

Al Jazeera asks: Can AI eliminate human bias or does it perpetuate it?

In its online series of digital dilemmas, Al Jazeera takes a look at AI in relation to social inequities. Loyal readers of this newsletter will recognise many of the examples they touch on, like how Stable Diffusion exacerbates and amplifies racial and gender disparities or the Dutch childcare benefits scandal.

Dutch police used algorithm to predict violent behaviour without any safeguards

For many years, the Dutch police have used a risk-modelling algorithm to predict the chance that an individual suspect will commit a violent crime. Follow the Money exposed the total lack of moral, legal, and statistical justification for its use, and the police have now stopped using the system.

Racist Technology in Action: How Pokémon Go inherited existing racial inequities

When Aura Bogado was playing Pokémon Go in a much Whiter neighbourhood than the one where she lived, she noticed how many more PokéStops were suddenly available. She then crowdsourced locations of these stops and found out, with the Urban Institute think tank, that there were on average 55 PokéStops in majority White neighbourhoods and 19 in neighbourhoods that were majority Black.

Racist Technology in Action: You look similar to someone we didn’t like → Dutch visa denied

Ignoring earlier Dutch failures in automated decision-making, and ignoring the advice of its own experts, the Dutch Ministry of Foreign Affairs has decided to cut costs and cut corners by implementing a discriminatory profiling system to process visa applications.

Quantifying bias in society with ChatGPT-like tools

ChatGPT is an implementation of a so-called ‘large language model’. These models are trained on text from the internet at large, which means that they inherit the biases that exist in our language and in our society. This has an interesting consequence: it suddenly becomes possible to see, in a quantitative and undeniable way, how bias changes over time.
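
The post does not prescribe a method, but as a minimal sketch of how such quantification can work, assuming the Hugging Face transformers library and the bert-base-uncased model (my choices, not the post’s): a masked language model can be probed for how strongly it associates occupations with gendered pronouns.

```python
# A minimal sketch of quantifying bias with a language model, assuming the
# Hugging Face `transformers` library and BERT (the post names no method).
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# Probe how strongly the model associates each occupation with "he" vs "she";
# the probabilities reflect the biases of the text the model was trained on.
for occupation in ["doctor", "nurse", "engineer"]:
    results = fill(f"[MASK] works as a {occupation}.", targets=["he", "she"])
    scores = {r["token_str"]: round(r["score"], 3) for r in results}
    print(occupation, scores)
```

Running the same probe on models trained on corpora from different periods would then show, quantitatively, how these associations shift over time.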

Dutch Institute for Human Rights: Use of anti-cheating software can be algorithmic discrimination (i.e. racist)

Dutch student Robin Pocornie filed a complaint with the Dutch Institute for Human Rights. The surveillance software that her university used had trouble recognising her as a human being because of her skin colour. After a hearing, the Institute has now ruled that Robin presented enough evidence to presume that she was indeed discriminated against. The ball is now in the court of the VU (her university) to prove that the software treated everybody the same.
