Women of colour in particular are calling out AI's biases

What you put into self-learning AI systems is what you get back. Technology, largely developed by white men, therefore amplifies and obscures these biases. Women (of colour) in particular are sounding the alarm.

By Marieke Rotman, Nani Jansen Reventlow, Oumaima Hajri and Tanya O’Carroll for De Groene Amsterdammer on July 12, 2023

Raziye Buse Çetin: ‘The absence of marginalised people in AI policymaking’

Creating welcoming and safe spaces for racialised people in policymaking is essential for addressing AI harms. Since the beginning of my career as an AI policy researcher, I have witnessed many important instances where people of colour were almost totally absent from AI policy conversations. I remember very well the discomfort I experienced when I was stopped at the entrance of a launch event for a report on algorithmic bias. The person tasked with ushering people into the meeting room was convinced that I was not “in the right place”. After that completely avoidable policing situation, I was in the room, but the room didn’t seem right to me. Although the topic was algorithmic bias and discrimination, I couldn’t spot a single racialised person there: the very people most likely to experience algorithmic harm.

By Raziye Buse Çetin for Who Writes The Rules on March 11, 2019

Rotterdam’s use of algorithms could lead to ethnic profiling

The Rekenkamer Rotterdam (the city’s Court of Audit) looked at how the city of Rotterdam uses predictive algorithms and whether that use could lead to ethical problems. In its report, it describes how the city lacks a proper overview of the algorithms it is using, how there is no coordination and thus no one takes responsibility when things go wrong, and how, even though one particular fraud-detection algorithm did not use sensitive data (like nationality), so-called proxy variables for ethnicity – such as low literacy, which can correlate with ethnicity – were still part of the calculations. According to the Rekenkamer, this could lead to unfair treatment, or as we would call it: ethnic profiling.
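To see why the Rekenkamer flags proxy variables, here is a minimal sketch in Python, using entirely synthetic data and a generic scikit-learn classifier rather than anything from Rotterdam’s actual system. A model that never sees the sensitive attribute can still flag one group far more often, because a correlated proxy (here, a made-up “literacy” score) carries the same information:

```python
# Hypothetical illustration of a proxy variable; all data is synthetic and
# the model is a generic classifier, not Rotterdam's fraud-detection system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Sensitive attribute (e.g. ethnicity); deliberately never given to the model.
group = rng.integers(0, 2, size=n)

# Proxy variable: a "literacy" score that correlates with group by construction.
literacy = rng.normal(loc=np.where(group == 1, -1.0, 1.0), scale=1.0)

# Historical labels produced by a biased process that penalised group 1.
label = (rng.normal(size=n) - 0.8 * group < -0.5).astype(int)

# Train on the proxy alone; the sensitive attribute itself is excluded.
model = LogisticRegression().fit(literacy.reshape(-1, 1), label)
flagged = model.predict(literacy.reshape(-1, 1))

# Flag rates still differ sharply between the two groups.
for g in (0, 1):
    print(f"group {g}: flagged {flagged[group == g].mean():.1%}")
```

On this synthetic data the model flags one group several times as often as the other, even though ethnicity never appears in its inputs: dropping the sensitive column removes the label, not the signal.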


Digital Ethics in Higher Education: 2020

New technologies, especially those relying on artificial intelligence or data analytics, are exciting but also present ethical challenges that deserve our attention and action. Higher education can and must lead the way.

By John O’Brien for EDUCAUSE Review on May 18, 2020
