OpenAI used outsourced workers in Kenya earning less than $2 per hour to scrub toxicity from ChatGPT.
By Billy Perrigo for Time on January 18, 2023
Just upload a selfie in the “AI avatar app” Lensa and it will generate a digital portrait of you. Think, for example, of a slightly more fit or beautiful version of yourself as an astronaut or the lead singer in a band. If you are a man, that is. As it turns out, for women, and especially women with Asian heritage, Lensa churns out pornified, sexy and skimpily clothed avatars.
Continue reading “Racist Technology in Action: Let’s make an avatar! Of sexy women and tough men of course”
Unsurprisingly, the artistic and ethical shortcomings of AI image generators are tied to their dependence on capital and capitalism.
By Marco Donnarumma for Hyperallergic on October 24, 2022
My avatars were cartoonishly pornified, while my male colleagues got to be astronauts, explorers, and inventors.
By Melissa Heikkilä for MIT Technology Review on December 12, 2022
When large language models fall short, the consequences can be serious. Why is it so hard to acknowledge that?
By Abeba Birhane and Deborah Raji for WIRED on December 9, 2022
This is according to experts at the University of Cambridge, who suggest that current portrayals and stereotypes about AI risk creating a “racially homogenous” workforce.
By Kanta Dihal and Stephen Cave for University of Cambridge on August 6, 2020
Social media app, Spill, designed by former Twitter employees, Alphonzo “Phonz” Terrell and DeVaris Brown, is becoming the chosen alternative for many.
By Kumba Kpakima for POCIT on December 21, 2022
Filters appear to be bypassed with simple tricks, and superficially masked. And what is lurking inside is egregious.
By Steven T. Piantadosi for Twitter on December 4, 2022
We are a non-profit creating more realistic and inclusive images of artificial intelligence. Visit our growing repository available for anyone to use for free under CC licences, or just to use as inspiration for more helpful and diverse representations of AI.
From Better Images of AI
Current methods to mitigate these effects fail to prevent images perpetuating racist, misogynist and otherwise problematic stereotypes.
By Justin Hendrix for Tech Policy Press on November 9, 2022
Deep learning models that allow you to make images from simple textual ‘prompts’ have recently become available for the general public. Having been trained on a world full of visual representations of social stereotypes, it comes as no surprise that these tools perpetuate a lot of biased and harmful imagery.
Continue reading “Racist Technology in Action: AI-generated image tools amplify harmful stereotypes”
Galactica language model generated convincing text about fact and nonsense alike.
By Benj Edwards for Ars Technica on November 18, 2022
I asked it to write about linguistic prejudice.
By Rikker Dockum for Twitter on November 16, 2022
Galactica was supposed to help scientists. Instead, it mindlessly spat out biased and incorrect nonsense.
By Will Douglas Heaven for MIT Technology Review on November 18, 2022
This fantastic article by Williams, Miceli and Gebru describes how the methodological shift of AI systems to deep-learning-based models has required enormous amounts of “data” for models to learn from. Large volumes of time-consuming work, such as labelling millions of images, can now be broken down into smaller tasks and outsourced to data labourers across the globe. These data labourers earn terribly low wages, often in dire working conditions.
Continue reading “AI innovation for whom, and at whose expense?”
‘Effective Altruism’ is in vogue, but deeply problematic.
Continue reading “Beware of ‘Effective Altruism’ and ‘Longtermism’”
Capitol Music Group faced a backlash for signing the artificial intelligence musician.
From BBC on August 24, 2022
A resource by Sareeta Amrute, Ranjit Singh, and Rigoberto Lara Guzmán exploring the presence of artificial intelligence and technology in the Majority World. 160 thematic works, available in English and Spanish.
By Ranjit Singh, Rigoberto Lara Guzmán and Sareeta Amrute for Data & Society on September 14, 2022
Supporting transnational worker organizing should be at the center of the fight for “ethical AI.”
By Adrienne Williams, Milagros Miceli and Timnit Gebru for Noema on October 13, 2022
Sennay Ghebreab, head of the Civic AI Lab which aims to develop AI in a socially inclusive manner, was interviewed by Kustaw Bessems for the Volkskrant podcast Stuurloos (in Dutch).
Continue reading “Listen to Sennay Ghebreab for clarity about what AI should and shouldn’t do”
Socialism is the most effective altruism. Who needs anything else? The repugnant philosophy of “Effective Altruism” offers nothing to movements for global justice.
By Nathan J. Robinson for Current Affairs on September 19, 2022
It is easy to think that artificial intelligence is only something to be wary of: a powerful weapon in the hands of governments or tech companies that are guilty of privacy violations, discrimination or unjust punishments. But we can also use algorithms to solve problems and work towards a more just world, computer scientist Sennay Ghebreab of the Civic AI Lab tells Kustaw Bessems. For that, though, we do need to understand the basics and have more of a say in it.
By Kustaw Bessems and Sennay Ghebreab for Volkskrant on September 11, 2022
A recent study in robotics has drawn attention from news media such as The Washington Post and VICE. In this study, researchers programmed virtual robots with popular artificial intelligence algorithms. The robots were then asked to scan blocks containing pictures of people’s faces and, following open-ended instructions, decide which blocks to put into a virtual “box”. The researchers quickly found that the robots repeatedly picked women and people of color to be put in the “box” when asked to respond to words such as “criminal”, “homemaker”, and “janitor”. The behaviour of these robots shows that the sexist and racist biases coded into AI algorithms have leaked into the field of robotics.
Continue reading “AI-trained robots bring algorithmic biases into robotics”
An example of racial bias in machine learning strikes again, this time by a program called PULSE, as reported by The Verge. Input a low resolution image of Barack Obama – or another person of colour such as Alexandria Ocasio-Cortez or Lucy Liu – and the resulting AI-generated high resolution image is distinctly of a white person.
Continue reading “Racist Technology in Action: Turning a Black person, White”
The Justice Department had accused Meta’s housing advertising system of discriminating against Facebook users based on their race, gender, religion and other characteristics.
By Mike Isaac for The New York Times on June 21, 2022
The images represent a glitch in the system that even its creator can’t explain.
By Nilesh Christopher for Rest of World on June 22, 2022
The fuss about a bot’s ‘consciousness’ obscures far more troubling concerns.
By Kenan Malik for The Guardian on June 19, 2022
Where people’s notion of beauty is often steeped in cultural preferences or plain prejudice, the objectivity of an AI system would surely allow it to access a more universal conception of beauty – or so thought the developers of Beauty.AI. Alex Zhavoronkov, who consulted on the development of the Beauty.AI system, described the dystopian motivation behind the system clearly: “Humans are generally biased and there needs to be a robot to provide an impartial opinion. Beauty.AI is the first step in a much larger story, in which a mobile app trained to evaluate perception of human appearance will evolve into a caring personal assistant to help users look their best and retain their youthful looks.”
Continue reading “Racist Technology in Action: Beauty is in the eye of the AI”
Previous studies in medical imaging have shown disparate abilities of artificial intelligence (AI) to detect a person’s race, yet there is no known correlation for race on medical imaging that would be obvious to human experts when interpreting the images. We aimed to conduct a comprehensive evaluation of the ability of AI to recognise a patient’s racial identity from medical images.
By Ananth Reddy Bhimireddy, Ayis T. Pyrros, Brandon J. Price, Chima Okechukwu, Haoran Zhang, Hari Trivedi, Imon Banerjee, John L. Burns, Judy Wawira Gichoya, Laleh Seyyed-Kalantari, Lauren Oakden-Rayner, Leo Anthony Celi, Li-Ching Chen, Lyle J. Palmer, Marzyeh Ghassemi, Matthew P. Lungren, Natalie Dullerud, Ramon Correa, Ryan Wang, Saptarshi Purkayastha, Shih-Cheng Huang, Po-Chih Kuo and Zachary Zaiman for The Lancet on May 11, 2022
The MIT Technology Review has written a four-part series on how the impact of AI is “repeating the patterns of colonial history.” The Review is careful not to directly compare the current situation with the colonialist capturing of land, extraction of resources, and exploitation of people. Yet, they clearly show that AI does further enrich the wealthy at the tremendous expense of the poor.
Continue reading “Don’t miss this 4-part journalism series on ‘AI Colonialism’”
In this piece, Julian Posada writes about a family of five in Venezuela, who synchronise their routines so that there will always be two people at the computer working for a crowdsourcing platform to make a living. They earn a few cents per task in a cryptocurrency and are only allowed to cash out once they’ve made at least the equivalent of USD 10. On average they earn about USD 20 per week, but their earnings can be erratic, resulting in extreme stress and precarity.
Continue reading “Exploitative labour is central to the infrastructure of AI”
A government leader in Argentina hailed the AI, which was fed invasive data about girls. The feminist pushback could inform the future of health tech.
By Alexa Hagerty, Diego Jemio and Florencia Aranda for WIRED on February 16, 2022
Those who could be exploited by AI should be shaping its projects.
By Pratyusha Kalluri for Nature on July 7, 2020
So long as algorithms are trained on racist historical data and outdated values, there will be no opportunities for change.
By Chris Gilliard for WIRED on January 2, 2022
Hiring sociocultural workers to correct bias overlooks the limitations of these underappreciated fields.
By Elena Maris for WIRED on January 12, 2022
We must curb the power of Silicon Valley and protect those who speak up about the harms of AI.
By Timnit Gebru for The Guardian on December 6, 2021
Timnit Gebru is launching Distributed Artificial Intelligence Research Institute (DAIR) to document AI’s harms on marginalized groups.
By Nitasha Tiku for Washington Post on December 2, 2021
When you or I seek out evidence to back up our existing beliefs and ignore the evidence that shows we’re wrong, it’s called “confirmation bias.” It’s a well-understood phenomenon that none of us are immune to, and thoughtful people put a lot of effort into countering it in themselves.
By Cory Doctorow for Pluralistic on December 2, 2021
Today, 30 November 2021, European Digital Rights (EDRi) and 114 civil society organisations launched a collective statement to call for an Artificial Intelligence Act (AIA) which foregrounds fundamental rights.
From European Digital Rights (EDRi) on November 30, 2021
AI should be seen as a new system technology, according to The Netherlands Scientific Council for Government Policy, meaning that its impact is large, affects the whole of society, and is hard to predict. In their new Mission AI report, the Council lists five challenges for successfully embedding system technologies in society, leading to ten recommendations for governments.
Continue reading “Dutch Scientific Council knows: AI is neither neutral nor always rational”