She had to catfish herself as a white woman to get a job: AI-mediated racism on LinkedIn and in recruiting

After being ghosted by numerous recruiters during her unemployment, Aliyah Jones, a Black woman, decided to create a LinkedIn ‘catfish’ account under the name Emily Osborne, a blonde-haired, blue-eyed white woman eager to advance her career in graphic design. The only difference between ‘Emily’ and Jones? Their names and skin colour. Their work experience and capabilities were the same.

Continue reading “She had to catfish herself as a white woman to get a job: AI-mediated racism on LinkedIn and in recruiting”

Racist Technology in Action: The algorithm that was supposed to match asylum seekers to places with jobs doesn’t work and is discriminatory

For many years and for many people, GeoMatch by the Immigration Policy Lab was a shining example of ‘AI for Good’: instead of using algorithms to hunt for criminals or fraud, why not use them to allocate asylum seekers to the regions that offer them the best job opportunities? Only the naive can be surprised that this didn’t work out as promised.

Continue reading “Racist Technology in Action: The algorithm that was supposed to match asylum seekers to places with jobs doesn’t work and is discriminatory”

Congolese government files complaint against Apple’s complicity in violence in Congo

On 16 December 2024, the Democratic Republic of Congo filed criminal complaints against Apple and its subsidiaries in France and Belgium, accusing the company of concealing war crimes in its international supply chains (the pillaging and laundering of “blood minerals”) and of misleading consumers. The complaints argue that Apple is complicit in crimes that are taking place in Congo.

Continue reading “Congolese government files complaint against Apple’s complicity in violence in Congo”

Why ‘open’ AI systems are actually closed, and why this matters

This paper examines ‘open’ artificial intelligence (AI). Claims about ‘open’ AI often lack precision, frequently eliding scrutiny of substantial industry concentration in large-scale AI development and deployment, and often incorrectly applying understandings of ‘open’ imported from free and open-source software to AI systems. At present, powerful actors are seeking to shape policy using claims that ‘open’ AI is either beneficial to innovation and democracy, on the one hand, or detrimental to safety, on the other. When policy is being shaped, definitions matter. To add clarity to this debate, we examine the basis for claims of openness in AI, and offer a material analysis of what AI is and what ‘openness’ in AI can and cannot provide: examining models, data, labour, frameworks, and computational power. We highlight three main affordances of ‘open’ AI, namely transparency, reusability, and extensibility, and we observe that maximally ‘open’ AI allows some forms of oversight and experimentation on top of existing models. However, we find that openness alone does not perturb the concentration of power in AI. Just as many traditional open-source software projects were co-opted in various ways by large technology companies, we show how rhetoric around ‘open’ AI is frequently wielded in ways that exacerbate rather than reduce concentration of power in the AI sector.

By David Gray Widder, Meredith Whittaker, and Sarah Myers West for Nature on November 27, 2024

Tech Workers’ Testimonies: Stories of Suppression of Palestinian Advocacy in the Workplace

The Arab Center for the Advancement of Social Media has released a new report titled, “Delete the Issue: Tech Worker Testimonies on Palestinian Advocacy and Workplace Suppression.” The report, the first of its kind, shares testimonies gathered from current and former employees in major technology companies, including Meta, Google, PayPal, Microsoft, LinkedIn, and Cisco. It highlights their experiences supporting Palestinian rights in the workplace and the companies’ efforts to restrict freedom of expression on the matter.

From 7amleh on November 11, 2024

Tech workers demand that Google and Amazon stop their complicity in Israel’s genocide against the Palestinian people

Since 2021, thousands of Amazon and Google tech workers have been organising against Project Nimbus, the companies’ shared US$1.2 billion contract with the Israeli government and military. To date, there has been no response from management or executives. Organising efforts have accelerated since 7 October 2023, with the Israeli state’s ongoing genocide in Gaza and the occupied Palestinian territories.

Continue reading “Tech workers demand that Google and Amazon stop their complicity in Israel’s genocide against the Palestinian people”

Data Work and its Layers of (In)visibility

No technology has seemingly steam-rolled through every industry and over every community the way artificial intelligence (AI) has in the past decade. Many speak of the inevitable crisis that AI will bring. Others sing its praises as a new Messiah that will save us from the ills of society. What the public and mainstream media hardly ever discuss is that AI is a technology that takes its cues from humans. Any present or future harms caused by AI are the direct result of deliberate human decisions by companies that prioritize record profits and attempt to concentrate power by convincing the world that technology is the only solution to societal problems.

By Adrienne Williams and Milagros Miceli for Just Tech on September 6, 2023

Mean Images

An artist considers a new form of machinic representation: the statistical rendering of large datasets, indexed to the probable rather than the real of photography; to the uncanny composite rather than the abstraction of the graph.

By Hito Steyerl for New Left Review on April 28, 2023

We come to bury ChatGPT, not to praise it.

Large language models (LLMs) like the GPT family learn the statistical structure of language by optimising their ability to predict missing words in sentences (as in ‘The cat sat on the [BLANK]’). Despite the impressive technical ju-jitsu of transformer models and the billions of parameters they learn, it’s still a computational guessing game. ChatGPT is, in technical terms, a ‘bullshit generator’.
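
To make that guessing game concrete, here is a minimal sketch of the fill-in-the-blank objective McQuillan describes. It is our own illustration, not his code: the Hugging Face transformers library and the bert-base-uncased model are assumptions, since the post names neither. BERT-style models are trained on exactly this masked-word task; GPT-family models instead predict the next token, but the statistical principle is the same.

```python
# A toy illustration of the "computational guessing game": ask a masked
# language model to fill in the blank in McQuillan's example sentence.
# Assumes the Hugging Face `transformers` library and `bert-base-uncased`
# (neither is named in the original post).
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# BERT marks the blank with its [MASK] token; top_k asks for the three
# highest-probability guesses.
for guess in fill("The cat sat on the [MASK].", top_k=3):
    print(f"{guess['token_str']!r} with probability {guess['score']:.2f}")
```

Running it prints the model’s highest-probability completions with their scores: statistical plausibility, not understanding.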

By Dan McQuillan for Dan McQuillan on February 6, 2023

The cheap, racialised Kenyan workers making ChatGPT “safe”

Stories about the hidden and exploitative racialised labour which fuels the development of technologies continue to surface, and this time the spotlight is on ChatGPT. Billy Perrigo, who previously reported on Meta’s content moderation sweatshop and on whistleblower Daniel Motaung, who took Meta to court, has shed light on how OpenAI relied upon outsourced exploitative labour in Kenya to make ChatGPT less toxic.

Continue reading “The cheap, racialised Kenyan workers making ChatGPT ‘safe’”
