What problems are AI-systems even solving? “Apparently, too few people ask that question”

In this interview with Felienne Hermans, Professor of Computer Science at the Vrije Universiteit Amsterdam, she discusses the sore lack of diversity in the white, male-dominated world of programming, the importance of teaching people how to code, and the problematic uses of AI systems.

Continue reading “What problems are AI-systems even solving? “Apparently, too few people ask that question””

Racist Technology in Action: Racial disparities in the scoring system used for housing allocation in L.A.

In another investigation by The Markup, significant racial disparities were found in the assessment system used by the Los Angeles Homeless Services Authority (LAHSA), the body responsible for coordinating homelessness services in Los Angeles. This assessment system relies on a tool called the Vulnerability Index-Service Prioritisation Decision Assistance Tool, or VI-SPDAT, to score and assess whether people qualify for subsidised permanent housing.

Continue reading “Racist Technology in Action: Racial disparities in the scoring system used for housing allocation in L.A.”

Stories of everyday life with AI in the global majority

This collection by the Data & Society Research Institute sheds an intimate and grounded light on what impact AI-systems can have. The guiding question that connects all of the 13 non-fiction pieces in Parables of AI in/from the Majority world: An Anthology is what stories can be told about a world in which solving societal issues is more and more dependent on AI-based and data-driven technologies? The book, edited by Rigoberto Lara Guzmán, Ranjit Singh and Patrick Davison, through narrating ordinary, everyday experiences in the majority world, slowly disentangles the global and unequally distributed impact of digital technologies.

Continue reading “Stories of everyday life with AI in the global majority”

Denmark’s welfare fraud system reflects a deeply racist and exclusionary society

As part of a series of investigative reporting by Lighthouse Reports and WIRED, Gabriel Geiger has revealed some of the findings about the use of welfare fraud algorithms in Denmark. This follows a trajectory of increasing use of algorithmic systems to detect welfare fraud across European cities, or at least of those systems that are currently known.

Continue reading “Denmark’s welfare fraud system reflects a deeply racist and exclusionary society”

The cheap, racialised, Kenyan workers making ChatGPT “safe”

Stories about the hidden and exploitative racialised labour which fuels the development of technologies continue to surface, and this time it is about ChatGPT. Billy Perrigo, who previously reported on Meta’s content moderation sweatshop and on whistleblower Daniel Motaung, who took Meta to court, has shed light on how OpenAI has relied on outsourced, exploitative labour in Kenya to make ChatGPT less toxic.

Continue reading “The cheap, racialised, Kenyan workers making ChatGPT “safe””

Quantifying bias in society with ChatGPT-like tools

ChatGPT is an implementation of a so-called ‘large language model’. These models are trained on text from the internet at large. This means that these models inherit the bias that exists in our language and in our society. This has an interesting consequence: it suddenly becomes possible to see, in a quantitative and undeniable way, how bias changes over time.

Continue reading “Quantifying bias in society with ChatGPT-like tools”

Racist Technology in Action: The “underdiagnosis bias” in AI algorithms for health: Chest radiographs

This study builds upon work in algorithmic bias, and bias in healthcare. The use of AI-based diagnostic tools has been motivated by a shortage of radiologists globally, and research which shows that AI algorithms can match specialist performance (particularly in medical imaging). Yet, the topic of AI-driven underdiagnosis has been relatively unexplored.

Continue reading “Racist Technology in Action: The “underdiagnosis bias” in AI algorithms for health: Chest radiographs”

What’s at stake with losing (Black) Twitter and moving to (white) Mastodon?

The imminent demise of Twitter after Elon Musk’s takeover sparked an exodus of people leaving the platform, which is only expected to increase. The significant increase in hate speech, and the general hostile atmosphere created by the erratic decrees of its owner (such as Trump’s reinstatement), made, in the words of New Yorker writer Jelani Cobb, “remaining completely untenable”. This often vocal movement of people away from the platform has sparked a debate on what people stand to lose and what the alternative is.

Continue reading “What’s at stake with losing (Black) Twitter and moving to (white) Mastodon?”

Profiting off Black bodies

Tiera Tanksley’s work seeks to better understand how forms of digitally mediated trauma, such as seeing images of Black people dead and dying on social media, are impacting Black girls’ mental and emotional wellness in the U.S. and Canada. Her fears were confirmed by her findings: Black girls report unprecedented levels of fear, depression, anxiety and chronic stress. Viewing Black people being killed by the state was deeply traumatic, with mental, emotional and physiological effects.

Continue reading “Profiting off Black bodies”

Racist Technology in Action: Let’s make an avatar! Of sexy women and tough men of course

Just upload a selfie to the “AI avatar app” Lensa and it will generate a digital portrait of you. Think, for example, of a slightly more fit or beautiful version of yourself as an astronaut or the lead singer in a band. If you are a man, that is. As it turns out, for women, and especially women of Asian heritage, Lensa churns out pornified, sexy and skimpily clothed avatars.

Continue reading “Racist Technology in Action: Let’s make an avatar! Of sexy women and tough men of course”

Dutch Institute for Human Rights: Use of anti-cheating software can be algorithmic discrimination (i.e. racist)

Dutch student Robin Pocornie filed a complaint with the Dutch Institute for Human Rights. The surveillance software that her university used had trouble recognising her as a human being because of her skin colour. After a hearing, the Institute has now ruled that Robin has presented enough evidence to assume that she was indeed discriminated against. The ball is now in the court of the VU (her university) to prove that the software treated everybody the same.

Continue reading “Dutch Institute for Human Rights: Use of anti-cheating software can be algorithmic discrimination (i.e. racist)”

Report: How police surveillance tech reinforces abuses of power

The UK organisation No Tech for Tyrants (NT4T) has published an extensive report on the use of surveillance technologies by the police in the UK, US, Mexico, Brazil, Denmark and India, in collaboration with researchers and activists from these countries. The report, titled “Surveillance Tech Perpetuates Police Abuse of Power”, examines the relationship between policing and technology through in-depth case studies.

Continue reading “Report: How police surveillance tech reinforces abuses of power”

Racist Technology in Action: AI-generated image tools amplify harmful stereotypes

Deep learning models that allow you to make images from simple textual ‘prompts’ have recently become available to the general public. Since these models have been trained on a world full of visual representations of social stereotypes, it comes as no surprise that these tools perpetuate a lot of biased and harmful imagery.

Continue reading “Racist Technology in Action: AI-generated image tools amplify harmful stereotypes”

The devastating consequences of risk based profiling by the Dutch police

Diana Sardjoe writes for Fair Trials about how her sons were profiled by the Amsterdam police on the basis of risk models (a form of predictive policing) called the ‘Top600’ (for adults) and the ‘Top400’ (for people aged 12 to 23). Because of this profiling, her sons were “continually monitored and harassed by police.”

Continue reading “The devastating consequences of risk based profiling by the Dutch police”

AI innovation for whom, and at whose expense?

This fantastic article by Williams, Miceli and Gebru describes how the methodological shift of AI systems towards deep-learning-based models has required enormous amounts of “data” for models to learn from. Large volumes of time-consuming work, such as labelling millions of images, can now be broken down into smaller tasks and outsourced to data labourers across the globe. These data labourers are paid terribly low wages and often work in dire conditions.

Continue reading “AI innovation for whom, and at whose expense?”

Whitewashing call centre workers’ accents

Silicon Valley strikes again, with yet another techno-solutionist idea. Sanas, a speech technology startup founded by three former Stanford students, aims to alter the accents of call centre workers situated in countries such as India and the Philippines. The goal is to make them sound white and American. With the slide of a button, a call centre worker’s voice is transformed into a slightly robotic, and unmistakably white, American voice.

Continue reading “Whitewashing call centre workers’ accents”

AI-trained robots bring algorithmic biases into robotics

A recent study in robotics has drawn attention from news media such as The Washington Post and VICE. In this study, researchers programmed virtual robots with popular artificial intelligence algorithms. These robots were then asked to scan blocks containing pictures of people’s faces and, following an open-ended instruction, decide which blocks to put into a virtual “box”. The researchers quickly found that the robots repeatedly picked women and people of colour to be put in the “box” when asked to respond to words such as “criminal”, “homemaker” and “janitor”. The behaviour of these robots shows that the sexist and racist biases coded into AI algorithms have leaked into the field of robotics.

Continue reading “AI-trained robots bring algorithmic biases into robotics”

Racist Technology in Action: How hiring tools can be sexist and racist

One of the classic examples of how AI systems can reinforce social injustice is Amazon’s A.I. hiring tool. In 2014, Amazon built an ‘A.I.-powered’ tool to assess resumes and recommend the top candidates who would go on to be interviewed. However, the tool turned out to be very biased, systematically preferring men over women.

Continue reading “Racist Technology in Action: How hiring tools can be sexist and racist”

Dutch student files complaint with the Netherlands Institute for Human Rights about the use of racist software by her university

During the pandemic, Dutch student Robin Pocornie had to sit her exams with a light pointing straight at her face. Her White fellow students didn’t have to do that. Her university’s surveillance software discriminated against her, and that is why she has filed a complaint (read the full complaint in Dutch) with the Netherlands Institute for Human Rights.

Continue reading “Dutch student files complaint with the Netherlands Institute for Human Rights about the use of racist software by her university”

Meta forced to change its advertisement algorithm to address algorithmic discrimination

In his New York Times article, Mike Isaac describes how Meta is implementing a new system to automatically check whether the housing, employment and credit ads it hosts are shown to people equally. This move follows a 111,054 US dollar fine that the US Justice Department issued to Meta because its ad systems have been shown to discriminate against its users by, amongst other things, excluding Black people from seeing certain housing ads in predominantly white neighbourhoods. It is the outcome of a long process, which we have written about previously.

Continue reading “Meta forced to change its advertisement algorithm to address algorithmic discrimination”

Exploited and silenced: Meta’s Black whistleblower in Nairobi

In 2019, a Facebook content moderator in Nairobi, Daniel Motaung, who was paid USD 2.20 per hour, was fired. He was working for one of Meta’s largest outsourcing partners in Africa, Sama, which brands itself as an “ethical AI” outsourcing company, and is headquartered in California. Motaung led a unionisation attempt with more than 100 colleagues, fighting for better wages and working conditions.

Continue reading “Exploited and silenced: Meta’s Black whistleblower in Nairobi”

Racist Technology in Action: Turning a Black person, White

An example of racial bias in machine learning strikes again, this time in a program called PULSE, as reported by The Verge. Input a low-resolution image of Barack Obama – or another person of colour such as Alexandria Ocasio-Cortez or Lucy Liu – and the resulting AI-generated high-resolution image is distinctly that of a white person.

Continue reading “Racist Technology in Action: Turning a Black person, White”

Racist Technology in Action: Beauty is in the eye of the AI

Where people’s notion of beauty is often steeped in cultural preferences or plain prejudice, the objectivity of an AI system would surely allow it to access a more universal conception of beauty – or so thought the developers of Beauty.AI. Alex Zhavoronkov, who consulted on the development of the Beauty.AI system, described the dystopian motivation behind the system clearly: “Humans are generally biased and there needs to be a robot to provide an impartial opinion. Beauty.AI is the first step in a much larger story, in which a mobile app trained to evaluate perception of human appearance will evolve into a caring personal assistant to help users look their best and retain their youthful looks.”

Continue reading “Racist Technology in Action: Beauty is in the eye of the AI”
