With the development of artificial intelligence racing forward at warp speed, some of the richest men in the world may be deciding the fate of humanity right now.
By Garrison Lovely for Jacobin on January 22, 2025
This paper examines ‘open’ artificial intelligence (AI). Claims about ‘open’ AI often lack precision, frequently eliding scrutiny of substantial industry concentration in large-scale AI development and deployment, and often incorrectly applying understandings of ‘open’ imported from free and open-source software to AI systems. At present, powerful actors are seeking to shape policy using claims that ‘open’ AI is either beneficial to innovation and democracy, on the one hand, or detrimental to safety, on the other. When policy is being shaped, definitions matter. To add clarity to this debate, we examine the basis for claims of openness in AI, and offer a material analysis of what AI is and what ‘openness’ in AI can and cannot provide: examining models, data, labour, frameworks, and computational power. We highlight three main affordances of ‘open’ AI, namely transparency, reusability, and extensibility, and we observe that maximally ‘open’ AI allows some forms of oversight and experimentation on top of existing models. However, we find that openness alone does not perturb the concentration of power in AI. Just as many traditional open-source software projects were co-opted in various ways by large technology companies, we show how rhetoric around ‘open’ AI is frequently wielded in ways that exacerbate rather than reduce concentration of power in the AI sector.
By David Gray Widder, Meredith Whittaker, and Sarah Myers West for Nature on November 27, 2024
UC Berkeley recently discovered a fund, established in 1975, to support research into eugenics. Now that the university's (avowed) stance on this ideology has changed, it repurposed the fund and commissioned a series on the legacies of eugenics for the LA Review of Books.
In the fifth essay of the Legacies of Eugenics series, Ruha Benjamin explores how AI evangelists wrap their self-interest in a cloak of humanistic concern.
By Ruha Benjamin for Los Angeles Review of Books on October 18, 2024
Using the method of jail(break)ing to study how the visualities of sensitive issues transform under the gaze of OpenAI's GPT-4o, we found that:
- Jail(break)ing takes place when the prompts force the model to combine jailing (transforming or fine-tuning content to comply with content restrictions) and jailbreaking (attempting to bypass or circumvent these restrictions).
- Image-to-text generation allows more space for controversy than text-to-image.
- Visual outputs reveal issue-specific and shared transformation patterns for charged, ambiguous, or divisive artefacts.
- These patterns include foregrounding the background or 'dressing up' (porn), imitative disambiguation (memes), pink-washing (protest), cartoonization/anonymization (war), and exaggeration of style (art).
By Alexandra Rosca, Elena Pilipets, Energy Ng, Esmée Colbourne, Marina Loureiro, Marloes Geboers, and Riccardo Ventura for Digital Methods Initiative on August 6, 2024
Using a very clever methodology, this year's Digital Methods Initiative Summer School participants show how generative AI models like OpenAI's GPT-4o will "dress up" controversial content, such as war, protest, or porn, when you push the model to work with it.
In the run-up to the EU elections, AlgorithmWatch has investigated which election-related images can be generated by popular AI systems. Two of the largest providers don't adhere to security measures they themselves recently announced.
By Nicolas Kayser-Bril for AlgorithmWatch on May 29, 2024