The New Artificial Intelligentsia

In the fifth essay of the Legacies of Eugenics series, Ruha Benjamin explores how AI evangelists wrap their self-interest in a cloak of humanistic concern.

By Ruha Benjamin for Los Angeles Review of Books on October 18, 2024

Beyond Surveillance – The Case Against AI Detection and AI Proctoring

Are you an educator seeking a supportive space to critically examine AI surveillance tools? This workshop is for you. In an era where AI increasingly pervades education, AI detection and proctoring have sparked significant controversy. These tools, categorized as academic surveillance software, algorithmically monitor behaviour and movements, and students are increasingly forced to submit to them. Together, we will move beyond surveillance toward a culture of trust and transparency, shining a light on the black box of surveillance and discussing our findings. In this two-hour workshop, we will explore AI detection and proctoring through a 40-minute presentation, an hour of activities and discussion, and 20 minutes of group tool evaluation using a rubric.

By Ian Linkletter for BCcampus on September 18, 2024

Meta and Lavender

A little-discussed detail in the Lavender AI article is that Israel is killing people based on their being in the same WhatsApp group as a suspected militant. Where are they getting this data? Is WhatsApp sharing it?

By Paul Biggar for Paul Biggar on April 16, 2024

After the childcare benefits scandal, in which many single-parent families and families with a migration background, among others, were wrongly accused of fraud, it became painfully clear that not only people discriminate, but algorithms do too

It was promised that these systems would become fairer, but the new annual report of the Autoriteit Persoonsgegevens shows that little has improved since then. Algorithms still wrongly categorize people with certain characteristics as a risk. Noëlle Cecilia, co-founder of Brush AI (@ai.brush), was a guest on Mandy's show on Sunday. She builds algorithms for companies and spent a year researching their fairness and discrimination. She explains why the mindset needs to change when developing AI systems.

By Noëlle Cecilia for Instagram on July 9, 2024

War, Memes, Art, Protest, and Porn: Jail(break)ing Synthetic Imaginaries Under OpenAI's Content Policy Restrictions

Using the method of jail(break)ing to study how the visualities of sensitive issues transform under the gaze of OpenAI's GPT-4o, we found that:

- Jail(break)ing takes place when the prompts force the model to combine jailing (transforming or fine-tuning content to comply with content restrictions) and jailbreaking (attempting to bypass or circumvent these restrictions).
- Image-to-text generation allows more space for controversy than text-to-image.
- Visual outputs reveal issue-specific and shared transformation patterns for charged, ambiguous, or divisive artefacts.
- These patterns include foregrounding the background or 'dressing up' (porn), imitative disambiguation (memes), pink-washing (protest), cartoonization/anonymization (war), and exaggeration of style (art).

By Alexandra Rosca, Elena Pilipets, Energy Ng, Esmée Colbourne, Marina Loureiro, Marloes Geboers, and Riccardo Ventura for Digital Methods Initiative on August 6, 2024

Racist Technology in Action: AI detection of emotion rates Black basketball players as ‘angrier’ than their White counterparts

In 2018, Lauren Rhue showed that two leading emotion detection software products had a racial bias against Black men: Face++ rated them as angrier, and Microsoft AI rated them as more contemptuous.


We'll fix the mistakes later: how the municipality unleashed a dubious algorithm on the people of Rotterdam

It seemed too good to be true: an algorithm to detect welfare fraud. Despite warnings, the municipality of Rotterdam kept believing in it for almost four years. A handful of civil servants, insufficiently aware of the ethical risks, were able to experiment undisturbed with the data of vulnerable people for years.

By Romy van Dijk and Saskia Klaassen for Vers Beton on October 23, 2023

The Great White Robot God

It may seem improbable at first glance that there are connections between the pursuit of artificial general intelligence (AGI) and white supremacy. Yet the more you examine the question, the clearer and more disturbing the links become.

By David Golumbia for David Golumbia on Medium on January 21, 2019
