The theme of Prinsessendag 2024 is AI & Politics. In her column, speaker Robin Pocornie explains why this theme has such a strong influence on the current political landscape.
By Robin Pocornie for Nederlandse Vrouwenraad on August 14, 2024
Two Google workers have resigned and another was fired over a project providing AI and cloud services to the Israeli government and military.
By Billy Perrigo for Time on April 10, 2024
A little-discussed detail in the Lavender AI article is that Israel is killing people based on their being in the same WhatsApp group as a suspected militant. Where are they getting this data? Is WhatsApp sharing it?
By Paul Biggar for Paul Biggar on April 16, 2024
Israeli intelligence sources reveal use of ‘Lavender’ system in Gaza war and claim permission given to kill civilians in pursuit of low-ranking militants.
By Bethan McKernan and Harry Davies for The Guardian on April 3, 2024
It was promised that these systems would become fairer, but the new annual report of the Autoriteit Persoonsgegevens shows that little has improved since then. Algorithms still wrongly categorise people with certain characteristics as a risk. Noëlle Cecilia, co-founder of Brush AI (@ai.brush), was a guest on Mandy's show on Sunday. She builds algorithms for companies and spent a year researching their fairness and discrimination. She explains to us why the mindset in developing AI systems needs to change.
By Noëlle Cecilia for Instagram on July 9, 2024
The digital divide seems to have flipped.
From The Economist on June 27, 2024
Using the method of jail(break)ing to study how the visualities of sensitive issues transform under the gaze of OpenAI's GPT-4o, we found that:
- Jail(break)ing takes place when the prompts force the model to combine jailing (transforming or fine-tuning content to comply with content restrictions) and jailbreaking (attempting to bypass or circumvent these restrictions).
- Image-to-text generation allows more space for controversy than text-to-image.
- Visual outputs reveal issue-specific and shared transformation patterns for charged, ambiguous, or divisive artefacts.
- These patterns include foregrounding the background or 'dressing up' (porn), imitative disambiguation (memes), pink-washing (protest), cartoonization/anonymization (war), and exaggeration of style (art).
By Alexandra Rosca, Elena Pilipets, Energy Ng, Esmée Colbourne, Marina Loureiro, Marloes Geboers, and Riccardo Ventura for Digital Methods Initiative on August 6, 2024
Using a very clever methodology, this year's Digital Methods Initiative Summer School participants show how generative AI models like OpenAI's GPT-4o will "dress up" controversial topics when you push the model to work with controversial content, like war, protest, or porn.
Continue reading “Generative AI’s ability to ‘pink-wash’ Black and Queer protests”
In 2018, Lauren Rhue showed that two leading emotion detection software products had a racial bias against Black men: Face++ thought they were more angry, and Microsoft AI thought they were more contemptuous.
Continue reading “Racist Technology in Action: AI detection of emotion rates Black basketball players as ‘angrier’ than their White counterparts”
AI that purports to read our feelings may enhance user experience, but concerns over misuse and bias mean the field is fraught with potential dangers.
By Ned Carter Miles for The Guardian on June 23, 2024
Algorithm Watch experimented with three major generative AI tools, generating 8,700 images of politicians. They found that all these tools make an active effort to lessen bias, but that the way they attempt to do this is problematic.
Continue reading “How generative AI tools represent EU politicians: in a biased way”
Our own Hans de Zwart was a guest in the ‘Met Nerds om Tafel’ podcast. With Karen Palmer (creator of Consensus Gentium, a film about surveillance that watches you back), they discussed the role of art and storytelling in getting us ready for the future.
Continue reading “Podcast: Art as a prophetic activity for the future of AI”
In the run-up to the EU elections, AlgorithmWatch has investigated which election-related images can be generated by popular AI systems. Two of the largest providers don’t adhere to security measures they themselves recently announced.
By Nicolas Kayser-Bril for AlgorithmWatch on May 29, 2024
It’s time to correct autocorrect.
From I am not a typo
‘I am not a typo’ campaign is calling for technology companies to make autocorrect less ‘western- and white-focused’.
By Robert Booth for The Guardian on May 22, 2024
It seemed too good to be true: an algorithm to detect welfare fraud. Despite warnings, the municipality of Rotterdam kept believing in it for almost four years. A handful of civil servants, insufficiently aware of the ethical risks, were able to experiment undisturbed with the data of vulnerable people for years.
By Romy van Dijk and Saskia Klaassen for Vers Beton on October 23, 2023
Discriminatory algorithm: According to an investigation, the algorithm that the Ministry of Foreign Affairs uses to assess visa applications discriminated. Dissatisfied with that conclusion, the ministry asked for a second opinion.
By Carola Houtekamer and Merijn Rengers for NRC on May 1, 2024
The ubiquitous availability of AI has made plagiarism detection software utterly useless, argues our Hans de Zwart in the Volkskrant.
Continue reading “AI detection has no place in education”
This Atlantic conversation between Matteo Wong and Abeba Birhane touches on some critical issues surrounding the use of large datasets to train AI models.
Continue reading “The datasets to train AI models need more checks for harmful and illegal materials”
Many AI bros are feverishly trying to attain what they call “Artificial General Intelligence” or AGI. In a piece on Medium, David Golumbia outlines connections between this pursuit of AGI and white supremacist thinking around “race science”.
Continue reading “White supremacy and Artificial General Intelligence”
Generative AI uses particular English words far more often than you would expect. Even though it is impossible to know for sure that a particular text was written by AI, you can say something about that in aggregate.
Continue reading “Racist Technology in Action: Outsourced labour in Nigeria is shaping AI English”
Workers in Africa have been exploited twice over: first by being paid a pittance to help make chatbots, then by having their own words become AI-ese. Plus, new AI gadgets are coming for your smartphones.
By Alex Hern for The Guardian on April 16, 2024
It may seem improbable at first glance to think that there might be connections between the pursuit of artificial general intelligence (AGI) and white supremacy. Yet the more you examine the question the clearer and more disturbing the links get.
By David Golumbia for David Golumbia on Medium on January 21, 2019
This is how these bosses get rich: by hiding underpaid, unrecognised human work behind the trappings of technology, says the writer and artist James Bridle.
By James Bridle for The Guardian on April 10, 2024
Bloomberg did a clever experiment: they had OpenAI’s GPT rank resumes and found that it shows a gender and racial bias just on the basis of the name of the candidate.
Continue reading “OpenAI’s GPT sorts resumes with a racial bias”
Recruiters are eager to use generative AI, but a Bloomberg experiment found bias against job candidates based on their names alone.
By Davey Alba, Leon Yin, and Leonardo Nicoletti for Bloomberg on March 8, 2024
Jalon Hall was featured on Google’s corporate social media accounts “for making #LifeAtGoogle more inclusive!” She says the company discriminated against her on the basis of her disability and race.
By Paresh Dave for WIRED on March 7, 2024
Opposing technology isn’t antithetical to progress.
By Tom Humberstone for MIT Technology Review on February 28, 2024
Researchers found that certain prejudices also worsened as models grew larger.
By James O’Donnell for MIT Technology Review on March 11, 2024
An explanation of how the issues with Gemini’s image generation of people happened, and what we’re doing to fix it.
By Prabhakar Raghavan for The Keyword on February 23, 2024
It’s hard to keep a stereotyping machine out of trouble.
By Russell Brandom for Rest of World on February 29, 2024
In a shallow attempt to do representation for representation’s sake, Google has managed to draw the ire of the right-wing internet by generating historically inaccurate and overly inclusive portraits of historical figures.
Continue reading “Google does performative identity politics, nonpologises, pauses their efforts, and will invariably move on to its next shitty moneymaking move”
Students are using ChatGPT to write their essays. Antiplagiarism tools are trying to detect whether a text was written by AI. It turns out that these types of detectors consistently misclassify the text of non-native speakers as AI-generated.
Continue reading “Racist Technology in Action: ChatGPT detectors are biased against non-native English writers”
According to Marc Schuilenburg, professor by special appointment of digital surveillance, we no longer have any secrets. In everything we do, something or someone is watching and registering our movements. We know it, yet we simply go along with it. That is how deeply digital surveillance is embedded in the capillaries of our society: ‘We often don’t even recognise it anymore.’
By Marc Schuilenburg and Sebastiaan Brommersma for Follow the Money on February 4, 2024
Machine learning is the process behind increasingly pervasive and often proprietary tools like ChatGPT, facial recognition, and predictive policing programs. But these artificial intelligence programs are only as good as their training data. When the data smuggle in a host of racial, gender, and other inequalities, biased outputs become the norm.
By Catherine Yeh and Sharla Alegria for SAGE Journals on November 15, 2023
The labour movement has a vital role to play and will grow in importance in 2024, says Timnit Gebru of the Distributed AI Research Institute.
By Timnit Gebru for The Economist on November 13, 2023
A conversation with Dr. Joy Buolamwini.
By Joy Buolamwini and Nabiha Syed for The Markup on November 18, 2023
Artificial intelligence image tools have a tendency to spin up disturbing clichés: Asian women are hypersexual. Africans are primitive. Europeans are worldly. Leaders are men. Prisoners are Black.
By Kevin Schaul, Nitasha Tiku and Szu Yu Chen for Washington Post on November 20, 2023
As Barbie-mania grips the world, the peppy cultural icon deserves thanks for helping to illustrate a darker side of artificial intelligence.
By Paige Collings and Rory Mir for Salon on August 17, 2023
Photographs were seen as less realistic than computer images but there was no difference with pictures of people of colour.
By Nicola Davis for The Guardian on November 13, 2023