Standing in solidarity with the Palestinian people

We at the Racism and Technology Center stand in solidarity with the Palestinian people. We condemn the violence enacted against the innocent people in Palestine and Israel, and mourn alongside all who are dead, injured and still missing. Palestinian communities are being subjected to unlawful collective punishment in Gaza and the West Bank, including the ongoing bombings and the blockade of water, food and energy. We call for an end to the blockade and an immediate ceasefire.

Continue reading “Standing in solidarity with the Palestinian people”
Featured post

She had to catfish herself as a white woman to get a job: AI-mediated racism on LinkedIn and in recruiting

After being ghosted by numerous recruiters during her unemployment, Aliyah Jones, a Black woman, decided to create a LinkedIn ‘catfish’ account under the name Emily Osborne, a blonde-haired, blue-eyed white woman eager to advance her career in graphic design. The only difference between ‘Emily’ and Jones? Their names and skin colour. Their work experience and capabilities were the same.

Continue reading “She had to catfish herself as a white woman to get a job: AI-mediated racism on LinkedIn and in recruiting”

Microsoft will update its English (UK) dictionaries with a more inclusive name database

We wrote about the I am not a typo campaign before. They have shared good news with us: “Responding to feedback from customers and working with members of the I Am Not A Typo campaign, Microsoft has implemented product updates to ensure its dictionary better reflects the names of people living in modern, multicultural Britain, using official Office for National Statistics (ONS) baby name data as a guide.”

Continue reading “Microsoft will update its English (UK) dictionaries with a more inclusive name database”

Racist Technology in Action: The algorithm that was supposed to match asylum seekers to places with jobs doesn’t work and is discriminatory

For many years and for many people, GeoMatch by the Immigration Policy Lab was a shining example of ‘AI for Good’: instead of using algorithms to find criminals or fraud, why don’t we use it to allocate asylum seekers to regions that give them the most job opportunities? Only the naive can be surprised that this didn’t work out as promised.

Continue reading “Racist Technology in Action: The algorithm that was supposed to match asylum seekers to places with jobs doesn’t work and is discriminatory”

50% of the profiling algorithms used by the Dutch tax office are discriminatory and therefore unlawful according to the Data Protection Authority

The Dutch tax office is plagued by one problem after another. The Child Benefits Scandal was supposed to be a wake-up call, but apparently the organisation is so atrophied that it can't seem to do what is needed. Follow the Money reports on a letter from the Dutch data protection authority to the Minister of Finance, which argues that the tax office relies on 50 algorithms that involve discriminatory profiling and are therefore potentially unlawful.

Continue reading “50% of the profiling algorithms used by the Dutch tax office are discriminatory and therefore unlawful according to the Data Protection Authority”

Racist Technology in Action: OpenAI’s Sora Launch: Yet another racist generative AI

The Guardian reports that OpenAI’s new AI video generator, Sora 2, launched with a social feed feature that allows users to easily share their generated videos on social media platforms. Predictably, within hours, violent and racist videos generated through Sora flooded these platforms. Despite OpenAI claiming to have implemented safeguards and mitigating measures, the app generated videos depicting mass shootings, bomb scares, and fabricated war footage from Gaza and Myanmar showing AI-generated children.

Continue reading “Racist Technology in Action: OpenAI’s Sora Launch: Yet another racist generative AI”

Racist Technology in Action: Scientists show that TikTok is racist, sexist, and disgusting

Last June, researchers quantitatively demonstrated TikTok’s racist, misogynistic and appalling practices by comparing different metrics of the popular beauty filter, Bold Glamour, and cross-referencing the results with the social media company’s own “inclusivity policies”.

Continue reading “Racist Technology in Action: Scientists show that TikTok is racist, sexist, and disgusting”

Setting the record straight: Scientists show that the algorithm that Proctorio used is incredibly biased towards people with a darker skin colour

Do you remember when our Robin Pocornie filed a complaint with the Dutch Human Rights Institute because Proctorio, the proctoring spyware she was forced to use to take her exams from home, couldn’t find her face as it was “too dark”? (If not, read the dossier of her case.)

Continue reading “Setting the record straight: Scientists show that the algorithm that Proctorio used is incredibly biased towards people with a darker skin colour”

Complicity in genocide: Follow the Money reveals Dutch universities’ ties with Israel’s defence industry through EU Horizon Projects

An investigation by Follow the Money reveals that Dutch universities are participating in at least 28 European Union-funded research projects with Israeli partners that potentially benefit the Israeli military, despite EU rules prohibiting military applications.

Continue reading “Complicity in genocide: Follow the Money reveals Dutch universities’ ties with Israel’s defence industry through EU Horizon Projects”

New York City uses a secret Child Welfare Algorithm

New York City’s Administration for Children’s Services (ACS) has been secretly using an AI risk assessment system since 2018 to flag families for additional investigation. This Markup investigation reveals how this algorithm mainly affects families of colour and raises serious questions about algorithmic bias against racialised and poor families in child welfare.

Continue reading “New York City uses a secret Child Welfare Algorithm”

Racist Technology in Action: How the municipality of Amsterdam tried to roll out a ‘fair’ fraud detection algorithm. Spoiler alert: it was a disaster

Amsterdam officials’ technosolutionist way of thinking struck once again: they believed they could build technology that would prevent fraud while protecting citizens’ rights through their “Smart Check” AI system.

Continue reading “Racist Technology in Action: How the municipality of Amsterdam tried to roll out a ‘fair’ fraud detection algorithm. Spoiler alert: it was a disaster”

Amnesty report (yet again) exposes racist AI in UK police forces

Amnesty International UK’s report Automated Racism (from last February, PDF) reveals that almost three-quarters of UK police forces use discriminatory predictive policing systems that perpetuate racial profiling. At least 33 forces deploy AI tools that predict crime locations and profile individuals as future criminals based on biased historical data, entrenching racism and inequality.

Continue reading “Amnesty report (yet again) exposes racist AI in UK police forces”

Join the (Dutch) Masterclass Net Politics!

To successfully govern our information society in the years to come, a solid understanding of technological developments is crucial. The Masterclass Net Politics is a series of meetings with speakers from Amnesty International, Bits of Freedom, Internet Society Netherlands, Open State Foundation, PublicSpaces, SetUp, Waag Futurelab, and yours truly: The Racism and Technology Center.

Continue reading “Join the (Dutch) Masterclass Net Politics!”

Racist Technology in Action: Grok AI is obsessively focused on the extreme right trope of “white genocide” in South Africa

For a day or so, Musk’s Grok AI chatbot inserted its belief in a “white genocide” in South Africa, by now a classic white supremacist fabrication, into nearly every answer it gave, regardless of the question asked.

Continue reading “Racist Technology in Action: Grok AI is obsessively focused on the extreme right trope of “white genocide” in South Africa”

Dutch Institute for Human Rights creates an evaluation framework for risk profiling and urges organisations to do more to prevent discrimination based on race and nationality

The Dutch Institute for Human Rights has published an evaluation framework for risk profiling, intended to prevent discrimination based on race or nationality.

Continue reading “Dutch Institute for Human Rights creates an evaluation framework for risk profiling and urges organisations to do more to prevent discrimination based on race and nationality”

‘Ethical’ AI in healthcare has a racism problem, and it needs to be fixed ASAP

We all know that racist algorithms can harm people across many sectors, and healthcare is no exception. In a powerful commentary published by Cell Press, Ferryman et al. argue that racism must be treated as a core ethical issue in healthcare AI, not merely a flaw to be patched after deployment.

Continue reading “‘Ethical’ AI in healthcare has a racism problem, and it needs to be fixed ASAP”

In Spain, an algorithm used by police to ‘combat’ gender violence determines whether women live or die

Lobna Hemid. Stefany González Escarraman. Eva Jaular (and her 11-month-old baby). The lives of these three women and an infant, amongst many others, were tragically cut short by gender-related killings in Spain. As reported in this article, they were all classified as “low” or “negligible” risk by VioGén, despite reporting abuse to the police. In the case of Lobna Hemid, after she reported her husband’s abuse and was assessed as “low risk” by VioGén, the police provided her with minimal protection; weeks later, her husband stabbed her to death.

Continue reading “In Spain, an algorithm used by police to ‘combat’ gender violence determines whether women live or die”

A GPT tool deployed by the US government further facilitates and normalises fascism

The US Army has adopted an artificial intelligence tool called ‘CamoGPT’ to systematically remove and exclude references to diversity, equity, inclusion, and accessibility (DEIA) from its training materials. This initiative aligns with an executive order from President Trump, signed on January 27th, titled Restoring America’s Fighting Force, which mandates the elimination of policies perceived as promoting “un-American, divisive, discriminatory, radical, extremist, and irrational theories” concerning race and gender.

Continue reading “A GPT tool deployed by the US government further facilitates and normalises fascism”

In reaction to censorship and the genocide in Gaza, the Muslim community creates its own technological infrastructure

If you want to send money to help the Palestinian cause, there is a high chance your payment might get blocked somewhere. The same goes if you want to make a game about the Palestinian experience and try to get it published in the app stores.

Continue reading “In reaction to censorship and the genocide in Gaza, the Muslim community creates its own technological infrastructure”

Racist Technology in Action: Grok’s total lack of safeguards against generating racist content

Grok is the chatbot made by xAI, a startup founded by Elon Musk, and is the generative AI solution that is powering X (née Twitter). It has recently gained a new power to generate photorealistic images, including those of celebrities. This is a problem as its ‘guardrails’ are lacking: it willingly generates racist and other deeply problematic images.

Continue reading “Racist Technology in Action: Grok’s total lack of safeguards against generating racist content”

Congolese government files complaint against Apple’s complicity in violence in Congo

On 16 December 2024, the Democratic Republic of Congo filed criminal complaints against Apple and its subsidiaries in France and Belgium for concealing war crimes in its international supply chains – the pillaging and laundering of “blood minerals” – and for misleading consumers. They argue that Apple is complicit in crimes that are taking place in Congo.

Continue reading “Congolese government files complaint against Apple’s complicity in violence in Congo”
