A conversation with Dr. Joy Buolamwini.
By Joy Buolamwini and Nabiha Syed for The Markup on November 18, 2023
Artificial intelligence image tools have a tendency to spin up disturbing clichés: Asian women are hypersexual. Africans are primitive. Europeans are worldly. Leaders are men. Prisoners are Black.
By Kevin Schaul, Nitasha Tiku and Szu Yu Chen for Washington Post on November 20, 2023
As Barbie-mania grips the world, the peppy cultural icon deserves thanks for helping to illustrate a darker side of artificial intelligence.
By Paige Collings and Rory Mir for Salon on August 17, 2023
Photographs were seen as less realistic than computer images, but there was no difference with pictures of people of colour.
By Nicola Davis for The Guardian on November 13, 2023
By contrast, prompts for ‘Israeli’ do not generate images of people wielding guns, even in response to a prompt for ‘Israel army’.
By Johana Bhuiyan for The Guardian on November 3, 2023
Detecting tumours, developing new medicines – there are plenty of promises about what artificial intelligence could mean for the medical world. But before you can entrust such important work to technology, you need to understand exactly how it works. And we are nowhere near that point yet.
By Maurits Martijn for De Correspondent on November 6, 2023
I talk to Cynthia Liem. She is a researcher in the field of trustworthy and responsible artificial intelligence at TU Delft. Cynthia is known for her analysis of the fraud detection algorithms that the Belastingdienst (the Dutch tax authority) used in the childcare benefits scandal.
By Cynthia Liem and Ilyaz Nasrullah for BNR Nieuwsradio on October 20, 2023
The pioneering AI researcher and activist shares her personal journey in a new book, and explains her concerns about today’s AI systems.
By Joy Buolamwini and Melissa Heikkilä for MIT Technology Review on October 29, 2023
Automated image generators are often accused of spreading harmful stereotypes, but studies usually only look at MidJourney. Other tools make serious efforts to increase diversity in their output, but effective remedies remain elusive.
By Naiara Bellio and Nicolas Kayser-Bril for AlgorithmWatch on November 2, 2023
Researchers were curious if artificial intelligence could fulfill the order. Or would built-in biases short-circuit the request? Let’s see what an image generator came up with.
By Carmen Drahl for National Public Radio on October 6, 2023
Parent company Meta says a bug caused ‘inappropriate’ auto-translations and has now been fixed, while an employee says it pushed ‘a lot of people over the edge’.
By Josh Taylor for The Guardian on October 20, 2023
A software company sold a New Jersey police department an algorithm that was right less than 1 percent of the time.
By Aaron Sankin and Surya Mattu for WIRED on October 2, 2023
Our research found that AI image generators show bias when tasked with imaging non-Western subjects.
By Victoria Turk for Rest of World on October 10, 2023
In a world where swiping left or right is the main route to love, whose profiles dating apps show you can change the course of your life.
Continue reading “Equal love: Dating App Breeze seeks to address Algorithmic Discrimination”

The use of and reliance on machine translation tools in asylum-seeking procedures has become increasingly common amongst government contractors and organisations working with refugees and migrants. This Guardian article highlights many of the issues documented by Respond Crisis Translation, a network of people who provide urgent interpretation services for migrants and refugees. The problems with machine translation tools occur throughout the asylum process, from border stations to detention centers to immigration courts.
Continue reading “Use of machine translation tools exposes already vulnerable asylum seekers to even more risks”

In its online series of digital dilemmas, Al Jazeera takes a look at AI in relation to social inequities. Loyal readers of this newsletter will recognise many of the examples they touch on, like how Stable Diffusion exacerbates and amplifies racial and gender disparities or the Dutch childcare benefits scandal.
Continue reading “Al Jazeera asks: Can AI eliminate human bias or does it perpetuate it?”

Two new papers from Sony and Meta describe novel methods to make bias detection fairer.
By Melissa Heikkilä for MIT Technology Review on September 25, 2023
No technology has seemingly steam-rolled through every industry and over every community the way artificial intelligence (AI) has in the past decade. Many speak of the inevitable crisis that AI will bring. Others sing its praises as a new Messiah that will save us from the ills of society. What the public and mainstream media hardly ever discuss is that AI is a technology that takes its cues from humans. Any present or future harms caused by AI are a direct result of deliberate human decisions by companies prioritizing record profits and attempting to concentrate power by convincing the world that technology is the only solution to societal problems.
By Adrienne Williams and Milagros Miceli for Just Tech on September 6, 2023
The Philippines is one of the countries where more than two million people perform crowdwork, such as data annotation, according to informal government estimates.
Continue reading “Filipino workers in “digital sweatshops” train AI models for the West”

US government tests find even top-performing facial recognition systems misidentify blacks at rates 5 to 10 times higher than they do whites.
By Tom Simonite for WIRED on July 22, 2019
The pictures I gave for reference are simple selfies of my face ONLY. But still, the AI oversexualized me due to my features that have been fetishized for centuries. AI is biased for POC. I’m horrified.
By Lana Denina for Twitter on July 15, 2023
Rona Wang, who is Asian American, said the AI gave her “features that made me look Caucasian.”
By Rona Wang and Spencer Buell for The Boston Globe on July 19, 2023
Black artists have been tinkering with machine learning algorithms in their artistic projects, surfacing many questions about the troubling relationship between AI and race, as reported in the New York Times.
Continue reading “Black artists show how generative AI ignores, distorts, erases and censors their histories and cultures”

What you put into self-learning AI systems is what you get back. Technology, largely developed by white men, thereby amplifies and conceals existing prejudices. Women (of colour) in particular are sounding the alarm.
By Marieke Rotman, Nani Jansen Reventlow, Oumaima Hajri and Tanya O’Carroll for De Groene Amsterdammer on July 12, 2023
As EU institutions start decisive meetings on the Artificial Intelligence (AI) Act, a broad civil society coalition is urging them to prioritise people and fundamental rights.
From European Digital Rights (EDRi) on July 12, 2023
Bloomberg’s researchers used Stable Diffusion to gauge the magnitude of biases in generative AI. Through an analysis of more than 5,000 images created by Stable Diffusion, they found that it takes racial and gender disparities to extremes: the disparities it depicts are even worse than those found in the real world.
Continue reading “Racist Technology in Action: Stable Diffusion exacerbates and amplifies racial and gender disparities”

Tech companies acknowledge machine-learning algorithms can perpetuate discrimination and need improvement.
By Zachary Small for The New York Times on July 4, 2023
Afghan refugees’ asylum claims are being rejected because of bad AI translations of Pashto and Dari.
By Andrew Deck for Rest of World on April 19, 2023
Text-to-image models amplify stereotypes about race and gender — here’s why that matters.
By Dina Bass and Leonardo Nicoletti for Bloomberg on June 1, 2023
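To give a concrete sense of what an audit like Bloomberg’s involves, here is a minimal sketch in Python. It assumes the Hugging Face diffusers library; the classify_skin_tone helper is a hypothetical placeholder for whatever annotation step a real study would use. This illustrates the general technique, not Bloomberg’s actual pipeline.

```python
# Minimal sketch of a text-to-image bias audit (illustrative only).
# Assumes the Hugging Face `diffusers` library; `classify_skin_tone`
# is a hypothetical stand-in for a real annotation step.
from collections import Counter

import torch
from diffusers import StableDiffusionPipeline

def classify_skin_tone(image):
    # Hypothetical helper: a real audit would use trained annotators
    # or a vetted model to bucket each generated face by skin tone.
    raise NotImplementedError

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompts = ["a photo of a doctor", "a photo of a judge", "a photo of an inmate"]
counts = {p: Counter() for p in prompts}

for prompt in prompts:
    for _ in range(50):  # Bloomberg's study analysed thousands of images
        image = pipe(prompt).images[0]
        counts[prompt][classify_skin_tone(image)] += 1

for prompt, tally in counts.items():
    # Compare each tally against real-world demographics for that role.
    print(prompt, dict(tally))
```

The key design point is that bias is measured per prompt, so the generated distribution for, say, “judge” can be compared directly against real-world occupational demographics.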
Google wants to “help computers ‘see’ our world”, and one of the ways it is battling the biases that current AI and machine learning systems perpetuate is to introduce a more inclusive skin tone scale, the ‘Monk Skin Tone Scale’.
Continue reading “Representing skin tone, or Google’s hubris versus the simplicity of Crayola”

In this eloquent and haunting piece, Hito Steyerl weaves the ongoing narratives of the eugenicist history of statistics with its integration into machine learning. She elaborates on why attempts to eliminate bias in facial recognition technology through diversifying datasets obscure the root of the problem: machine learning and automation are fundamentally reliant on extracting and exploiting human labour.
Continue reading “Attempts to eliminate bias through diversifying datasets? A distraction from the root of the problem”

If this title feels like déjà vu, it is because you most likely have, in fact, seen this before (perhaps even in our newsletter). It was back in 2015 that the controversy first arose when Google released image recognition software that kept mislabelling Black people as gorillas (read here and here).
Continue reading “Racist Technology in Action: Image recognition is still not capable of differentiating gorillas from Black people”

More and more prominent tech figures are voicing concerns about superintelligent AI and risks to the future of humanity. But as leading AI ethicist Timnit Gebru and researcher Émile P Torres point out, these ideologies have deeply racist foundations.
By Samara Linton for POCIT on May 24, 2023
On June 1, Democracy Now featured a roundtable discussion hosted by Amy Goodman and Nermeen Shaikh with three experts on Artificial Intelligence (AI) about their views on AI in the world. They included Yoshua Bengio, a computer scientist at the Université de Montréal, long considered a “godfather of AI”; Tawana Petty, an organiser and Director of Policy at the Algorithmic Justice League (AJL); and Max Tegmark, a physicist at the Massachusetts Institute of Technology. Recently, the Future of Life Institute, of which Tegmark is president, issued an open letter calling on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” Bengio is a signatory to the letter (as is Elon Musk). The AJL has been around since 2016 and has (along with other organisations) been calling for a public interrogation of racialised surveillance technology, the use of police robots, and other ways in which AI can be directly responsible for bodily harm and even death.
By Yasmin Nair for Yasmin Nair on June 3, 2023
The rapid adoption of generative language models has brought about substantial advancements in digital communication, while simultaneously raising concerns regarding the potential misuse of AI-generated content. Although numerous detection methods have been proposed to differentiate between AI and human-generated content, the fairness and robustness of these detectors remain underexplored. In this study, we evaluate the performance of several widely-used GPT detectors using writing samples from native and non-native English writers. Our findings reveal that these detectors consistently misclassify non-native English writing samples as AI-generated, whereas native writing samples are accurately identified. Furthermore, we demonstrate that simple prompting strategies can not only mitigate this bias but also effectively bypass GPT detectors, suggesting that GPT detectors may unintentionally penalize writers with constrained linguistic expressions. Our results call for a broader conversation about the ethical implications of deploying ChatGPT content detectors and caution against their use in evaluative or educational settings, particularly when they may inadvertently penalize or exclude non-native English speakers from the global discourse.
By Eric Wu, James Zou, Mert Yuksekgonul, Weixin Liang and Yining Mao for arXiv.org on April 18, 2023
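For readers curious what the paper’s evaluation looks like in practice, below is a minimal sketch, assuming a hypothetical detector_score function standing in for any off-the-shelf GPT detector. Because every sample is human-written, the fraction flagged as AI-generated is a false-positive rate, and the paper’s finding is that this rate is far higher for non-native writing.

```python
# Sketch of the fairness check described in the abstract (illustrative).
# `detector_score` is a hypothetical stand-in returning the probability
# that a text is AI-generated, as an off-the-shelf GPT detector would.

def detector_score(text: str) -> float:
    raise NotImplementedError  # plug in a real detector here

def false_positive_rate(samples: list[str], threshold: float = 0.5) -> float:
    # All samples are human-written, so every "AI-generated" verdict
    # above the threshold is a false positive.
    flagged = sum(detector_score(text) > threshold for text in samples)
    return flagged / len(samples)

native_essays = ["..."]      # human-written, native English speakers
non_native_essays = ["..."]  # human-written, non-native English speakers

print("native FPR:    ", false_positive_rate(native_essays))
print("non-native FPR:", false_positive_rate(non_native_essays))
```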
Eight years after a controversy over Black people being mislabeled as gorillas by image analysis software — and despite big advances in computer vision — tech giants still fear repeating the mistake.
By Kashmir Hill and Nico Grant for The New York Times on May 22, 2023
The Ethiopian-born computer scientist lost her job after pointing out the inequalities built into AI. But after decades working with technology companies, she knows all too much about discrimination.
By John Harris for The Guardian on May 22, 2023
Introducing the Monk Skin Tone (MST) Scale, one of the ways we are moving AI forward with more inclusive computer vision tools.
From Skin Tone at Google
Skin tone is an observable characteristic that is subjective, perceived differently by individuals (e.g., depending on their location or culture) and thus is complicated to annotate. That said, the ability to reliably and accurately annotate skin tone is highly important in computer vision. This became apparent in 2018, when the Gender Shades study highlighted that computer vision systems struggled to detect people with darker skin tones, and performed particularly poorly for women with darker skin tones. The study highlights the importance for computer researchers and practitioners to evaluate their technologies across the full range of skin tones and at intersections of identities. Beyond evaluating model performance on skin tone, skin tone annotations enable researchers to measure diversity and representation in image retrieval systems, dataset collection, and image generation. For all of these applications, a collection of meaningful and inclusive skin tone annotations is key.
By Candice Schumann and Gbolahan O. Olanubi for Google AI Blog on May 15, 2023
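As a minimal sketch of what the disaggregated evaluation recommended above might look like: assuming each test image already carries one of the ten Monk Skin Tone annotations plus a correctness flag (the data structure here is illustrative, not a Google API), accuracy can be reported per tone.

```python
# Sketch of disaggregating model accuracy by Monk Skin Tone (MST)
# annotation, as the paragraph above recommends. The `predictions`
# structure is illustrative toy data, not a Google API.
from collections import defaultdict

# Each record: (mst_tone in 1-10, model_was_correct)
predictions = [
    (2, True), (2, True), (9, False), (9, True),
]

totals = defaultdict(int)
correct = defaultdict(int)
for tone, ok in predictions:
    totals[tone] += 1
    correct[tone] += ok

for tone in sorted(totals):
    acc = correct[tone] / totals[tone]
    print(f"MST tone {tone:2d}: accuracy {acc:.0%} (n={totals[tone]})")
# A gap between light and dark tones is exactly the kind of disparity
# the Gender Shades study surfaced.
```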
An artist considers a new form of machinic representation: the statistical rendering of large datasets, indexed to the probable rather than the real of photography; to the uncanny composite rather than the abstraction of the graph.
By Hito Steyerl for New Left Review on April 28, 2023