Meta and Lavender

A little-discussed detail in the Lavender AI article is that Israel is killing people for being in the same WhatsApp group as a suspected militant. Where are they getting this data? Is WhatsApp sharing it?

By Paul Biggar for Paul Biggar on April 16, 2024

After the childcare benefits scandal, in which, among others, many single-parent families and families with a migration background were wrongly accused of fraud, it became painfully clear that not only people discriminate, but algorithms do too.

It was promised that these systems would become fairer, but the new annual report of the Autoriteit Persoonsgegevens (the Dutch Data Protection Authority) shows that little has improved since then. Algorithms still wrongly categorise people with certain characteristics as a risk. Noëlle Cecilia, co-founder of Brush AI (@ai.brush), was a guest on Mandy on Sunday. She builds algorithms for companies and spent a year researching their fairness and discrimination. She explains to us why the mindset needs to change when developing AI systems.

By Noëlle Cecilia for Instagram on July 9, 2024

War, Memes, Art, Protest, and Porn: Jail(break)ing Synthetic Imaginaries Under OpenAI’s Content Policy Restrictions

Using the method of jail(break)ing to study how the visualities of sensitive issues transform under the gaze of OpenAI’s GPT-4o, we found that:

– Jail(break)ing takes place when the prompts force the model to combine jailing (transforming or fine-tuning content to comply with content restrictions) and jailbreaking (attempting to bypass or circumvent these restrictions).
– Image-to-text generation allows more space for controversy than text-to-image.
– Visual outputs reveal issue-specific and shared transformation patterns for charged, ambiguous, or divisive artefacts.
– These patterns include foregrounding the background or ‘dressing up’ (porn), imitative disambiguation (memes), pink-washing (protest), cartoonization/anonymization (war), and exaggeration of style (art).

By Alexandra Rosca, Elena Pilipets, Energy Ng, Esmée Colbourne, Marina Loureiro, Marloes Geboers, and Riccardo Ventura for Digital Methods Initiative on August 6, 2024

Racist Technology in Action: AI detection of emotion rates Black basketball players as ‘angrier’ than their White counterparts

In 2018, Lauren Rhue showed that two leading emotion detection software products had a racial bias against Black men: Face++ rated them as angrier, and Microsoft AI rated them as more contemptuous.

We’ll fix the mistakes later: how the municipality unleashed a dubious algorithm on Rotterdammers

It seemed too good to be true: an algorithm to detect welfare fraud. Despite warnings, the municipality of Rotterdam kept believing in it for almost four years. A handful of civil servants, insufficiently aware of the ethical risks, were able to experiment undisturbed with the data of vulnerable people for years.

By Romy van Dijk and Saskia Klaassen for Vers Beton on October 23, 2023

The Great White Robot God

It may seem improbable at first glance to think that there might be connections between the pursuit of artificial general intelligence (AGI) and white supremacy. Yet the more you examine the question, the clearer and more disturbing the links get.

By David Golumbia for David Golumbia on Medium on January 21, 2019

Google does performative identity politics, nonpologises, pauses its efforts, and will invariably move on to its next shitty moneymaking move

In a shallow attempt to do representation for representation’s sake, Google has managed to draw the ire of the right-wing internet by generating historically inaccurate and overly inclusive portraits of historical figures.

‘Forget the surveillance state, we live in a surveillance society’

According to Marc Schuilenburg, professor by special appointment of digital surveillance, we no longer have any secrets. In everything we do, something or someone is watching and registering our movements. We know it, yet we simply go along with it. That is how deep digital surveillance sits in the capillaries of our society: ‘We often don’t even recognise it anymore.’

By Marc Schuilenburg and Sebastiaan Brommersma for Follow the Money on February 4, 2024

Machine Learning and the Reproduction of Inequality

Machine learning is the process behind increasingly pervasive and often proprietary tools like ChatGPT, facial recognition, and predictive policing programs. But these artificial intelligence programs are only as good as their training data. When the data smuggle in a host of racial, gender, and other inequalities, biased outputs become the norm.

By Catherine Yeh and Sharla Alegria for SAGE Journals on November 15, 2023
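
The claim that these programs are “only as good as their training data” can be made concrete with a toy simulation. The sketch below is mine, not the authors’: it generates synthetic hiring records in which one group faced a fixed historical penalty, trains an off-the-shelf logistic regression on them, and shows the model reproducing that penalty for equally skilled candidates. All variable names and numbers are invented for illustration.

```python
# Minimal sketch of "biased data in, biased outputs out" (hypothetical
# example, not the paper's method), using synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups; skill is identically distributed in both.
group = rng.integers(0, 2, size=n)
skill = rng.normal(0, 1, size=n)

# Historical labels: hiring depended on skill, but group 1 also faced
# a fixed penalty -- the inequality the training data "smuggles in".
logits = skill - 1.0 * group
hired = rng.random(n) < 1 / (1 + np.exp(-logits))

# Train a model on those historical decisions.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# For two equally skilled candidates (skill = 0), the model predicts a
# lower hiring probability for group 1: the bias becomes the norm.
probs = model.predict_proba([[0.0, 0], [0.0, 1]])[:, 1]
print(f"P(hired | skill=0, group 0) = {probs[0]:.2f}")
print(f"P(hired | skill=0, group 1) = {probs[1]:.2f}")
```

Nothing in the pipeline is malicious; the model simply learns the disparity that is already in the labels, which is the dynamic Yeh and Alegria describe at scale.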

This is how AI image generators see the world

Artificial intelligence image tools have a tendency to spin up disturbing clichés: Asian women are hypersexual. Africans are primitive. Europeans are worldly. Leaders are men. Prisoners are Black.

By Kevin Schaul, Nitasha Tiku, and Szu Yu Chen for Washington Post on November 20, 2023
