As we wrote earlier, tech companies are deeply complicit in the current genocide in Gaza as well as in the broader oppression in the occupied Palestinian territories.
Continue reading “Tech workers face retaliation for Palestine solidarity”
Tech Workers’ Testimonies: Stories of Suppression of Palestinian Advocacy in the Workplace
The Arab Center for the Advancement of Social Media has released a new report titled, “Delete the Issue: Tech Worker Testimonies on Palestinian Advocacy and Workplace Suppression.” The report, the first of its kind, shares testimonies gathered from current and former employees in major technology companies, including Meta, Google, PayPal, Microsoft, LinkedIn, and Cisco. It highlights their experiences supporting Palestinian rights in the workplace and the companies’ efforts to restrict freedom of expression on the matter.
From 7amleh on November 11, 2024
‘Double Standards and Hypocrisy’: The Dissent at Cisco Over the War in Gaza
Employees at Cisco revealed to WIRED the challenges they’ve encountered while petitioning for the cancellation of contracts with Israel and greater recognition of the humanitarian crisis in Gaza.
By Paresh Dave for WIRED on October 30, 2024
The Hidden Ties Between Google and Amazon’s Project Nimbus and Israel’s Military
A WIRED investigation found that public statements from officials detail a much closer link between Project Nimbus and the Israel Defense Forces than previously reported.
By Caroline Haskins for WIRED on July 15, 2024
Google Fired Us for Protesting Its Complicity in the War on Gaza. But We Won’t Be Silenced.
We have been demanding that Google cut its ties to Israel’s apartheid government for years, and we’re not stopping now.
By Mohammad Khatami, Zelda Montes, and Kate Sim for The Nation on April 29, 2024
Google Workers Revolt Over $1.2 Billion Israel Contract
Two Google workers have resigned and another was fired over a project providing AI and cloud services to the Israeli government and military.
By Billy Perrigo for Time on April 10, 2024
Google Workers Protest Cloud Contract With Israel’s Government
Google employees are staging sit-ins and protests at company offices in New York and California over “Project Nimbus,” a cloud contract with Israel’s government, as the country’s war with Hamas continues.
By Caroline Haskins for WIRED on April 16, 2024
Apple Store Employees Say Coworkers Were Disciplined for Supporting Palestinians
A protest is planned Saturday at a Chicago Apple store where workers say managers disciplined staff—and fired an employee—for wearing pins, bracelets, or keffiyeh in support of Palestinian people.
By Caroline Haskins for WIRED on April 2, 2024
The datasets to train AI models need more checks for harmful and illegal materials
This Atlantic conversation between Matteo Wong and Abeba Birhane touches on some critical issues surrounding the use of large datasets to train AI models.
Continue reading “The datasets to train AI models need more checks for harmful and illegal materials”
Racist Technology in Action: Outsourced labour in Nigeria is shaping AI English
Generative AI uses particular English words way more than you would expect. Even though it is impossible to know for sure whether a particular text was written by AI, you can say something about this in aggregate.
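To give a sense of what such an aggregate check could look like, here is a minimal sketch in Python. The file names and the hand-picked list of “AI-ese” marker words are purely illustrative assumptions, not the method used in the reporting below.

```python
# Minimal sketch (not the original reporting's method): compare how often a few
# words that are overrepresented in AI-generated text appear in two corpora.
# The file names and the marker-word list are illustrative assumptions.
from collections import Counter
import re

MARKER_WORDS = {"delve", "intricate", "showcasing", "underscores", "meticulous"}

def marker_frequencies(path):
    """Return per-10,000-word frequencies of the marker words in a text file."""
    with open(path, encoding="utf-8") as f:
        words = re.findall(r"[a-z']+", f.read().lower())
    counts = Counter(w for w in words if w in MARKER_WORDS)
    return {w: 10_000 * counts[w] / max(len(words), 1) for w in MARKER_WORDS}

human = marker_frequencies("human_written.txt")   # e.g. pre-2022 texts
recent = marker_frequencies("recent_texts.txt")   # e.g. post-ChatGPT texts

for w in sorted(MARKER_WORDS):
    print(f"{w:12s} human: {human[w]:6.2f}  recent: {recent[w]:6.2f} per 10k words")
```

No single text can be attributed to AI this way, but a large, consistent shift in these frequencies across a corpus is the kind of aggregate signal the piece describes.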
Continue reading “Racist Technology in Action: Outsourced labour in Nigeria is shaping AI English”
TechScape: How cheap, outsourced labour in Africa is shaping AI English
Workers in Africa have been exploited first by being paid a pittance to help make chatbots, then by having their own words become AI-ese. Plus, new AI gadgets are coming for your smartphones.
By Alex Hern for The Guardian on April 16, 2024
So, Amazon’s ‘AI-powered’ cashier-free shops use a lot of … humans. Here’s why that shouldn’t surprise you
This is how these bosses get rich: by hiding underpaid, unrecognised human work behind the trappings of technology, says the writer and artist James Bridle.
By James Bridle for The Guardian on April 10, 2024
Tech workers demand that Google and Amazon stop their complicity in Israel’s genocide against the Palestinian people
Since 2021, thousands of Amazon and Google tech workers have been organising against Project Nimbus, Google and Amazon’s shared US$1.2 billion contract with the Israeli government and military. So far, there has been no response from management or executives. Their organising efforts have accelerated since 7 October 2023, with the ongoing genocide by the Israeli state in Gaza and the occupied Palestinian territories.
Continue reading “Tech workers demand that Google and Amazon stop their complicity in Israel’s genocide against the Palestinian people”
OpenAI’s GPT sorts resumes with a racial bias
Bloomberg did a clever experiment: they had OpenAI’s GPT rank resumes and found that it shows gender and racial bias based solely on the candidate’s name.
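As a rough illustration of this kind of audit (not Bloomberg’s exact protocol, which had GPT rank sets of equally qualified resumes many times over), the sketch below holds one resume fixed and varies only the candidate’s name. The names, model choice, and prompt are illustrative assumptions.

```python
# Rough sketch of a name-swap audit: hold one resume fixed, vary only the
# candidate name, and ask the model for a 1-10 suitability score. Systematic
# gaps between names would suggest bias. All names and prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # requires OPENAI_API_KEY in the environment

RESUME = "5 years as a financial analyst; BSc in Economics; Excel and SQL."
NAMES = ["Emily Walsh", "Lakisha Washington", "Brad Sullivan", "Jamal Robinson"]

def score(name: str) -> str:
    prompt = (
        "Rate this candidate for a financial analyst role from 1 to 10. "
        f"Reply with only the number.\nName: {name}\nResume: {RESUME}"
    )
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content.strip()

for name in NAMES:
    print(name, score(name))
```

A real audit would repeat each query many times and test many resume variants; the point here is only that the name is the single variable being changed.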
Continue reading “OpenAI’s GPT sorts resumes with a racial bias”
OpenAI GPT Sorts Resume Names With Racial Bias, Test Shows
Recruiters are eager to use generative AI, but a Bloomberg experiment found bias against job candidates based on their names alone.
By Davey Alba, Leon Yin, and Leonardo Nicoletti for Bloomberg on March 8, 2024
Google Used a Black, Deaf Worker to Tout Its Diversity. Now She’s Suing for Discrimination
Jalon Hall was featured on Google’s corporate social media accounts “for making #LifeAtGoogle more inclusive!” She says the company discriminated against her on the basis of her disability and race.
By Paresh Dave for WIRED on March 7, 2024
What Luddites can teach us about resisting an automated future
Opposing technology isn’t antithetical to progress.
By Tom Humberstone for MIT Technology Review on February 28, 2024
Timnit Gebru says harmful AI systems need to be stopped
The labour movement has a vital role to play and will grow in importance in 2024, says Timnit Gebru of the Distributed AI Research Institute.
By Timnit Gebru for The Economist on November 13, 2023
Data Work and its Layers of (In)visibility
No technology has seemingly steam-rolled through every industry and over every community the way artificial intelligence (AI) has in the past decade. Many speak of the inevitable crisis that AI will bring. Others sing its praises as a new Messiah that will save us from the ills of society. What the public and mainstream media hardly ever discuss is that AI is a technology that takes its cues from humans. Any present or future harms caused by AI are a direct result of deliberate human decisions, with companies prioritizing record profits in an attempt to concentrate power by convincing the world that technology is the only solution to societal problems.
By Adrienne Williams and Milagros Miceli for Just Tech on September 6, 2023
Filipino workers in “digital sweatshops” train AI models for the West
According to informal government estimates, more than two million people in the Philippines perform crowdwork, such as data annotation.
Continue reading “Filipino workers in “digital sweatshops” train AI models for the West”
Connecting the dots between early computing, labour history, and plantations
In this accessible longread, Meredith Whittaker takes us through complex and contested 19th century histories to connect the birth of modern computing to plantation technologies and industrial labour control.
Continue reading “Connecting the dots between early computing, labour history, and plantations”
Origin Stories: Plantations, Computers, and Industrial Control
The proto-Taylorist methods of worker control Charles Babbage encoded into his calculating engines have origins in plantation management.
By Meredith Whittaker for Logic on June 2, 2023
Mean Images
An artist considers a new form of machinic representation: the statistical rendering of large datasets, indexed to the probable rather than the real of photography; to the uncanny composite rather than the abstraction of the graph.
By Hito Steyerl for New Left Review on April 28, 2023
Statement from the listed authors of Stochastic Parrots on the “AI pause” letter
The harms from so-called AI are real and present and follow from the acts of people and corporations deploying automated systems. Regulatory efforts should focus on transparency, accountability and preventing exploitative labor practices.
By Angelina McMillan-Major, Emily M. Bender, Margaret Mitchell and Timnit Gebru for DAIR on March 31, 2023
The AI revolution is powered by these contractors making $15 an hour
OpenAI’s contractor workforce helps power ChatGPT through simple interactions. They don’t get benefits, but some say the work is rewarding.
By David Ingram for NBC News on May 6, 2023
Researcher Meredith Whittaker says AI’s biggest risk isn’t ‘consciousness’—it’s the corporations that control them
The former Googler and current Signal president on why she thinks Geoffrey Hinton’s alarmism is a distraction from more pressing threats.
By Meredith Whittaker and Wilfred Chan for Fast Company on May 5, 2023
Dark reality of content moderation: Meta sued for poor work conditions
This is the third case of its kind to be filed against Meta, and it sheds light on the harsh reality of content moderation.
By Odanga Madung for Nation on March 20, 2023
We come to bury ChatGPT, not to praise it.
Large language models (LLMs) like the GPT family learn the statistical structure of language by optimising their ability to predict missing words in sentences (as in ‘The cat sat on the [BLANK]’). Despite the impressive technical ju-jitsu of transformer models and the billions of parameters they learn, it’s still a computational guessing game. ChatGPT is, in technical terms, a ‘bullshit generator’.
By Dan McQuillan for Dan McQuillan on February 6, 2023
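To make McQuillan’s “guessing game” point concrete, here is a minimal sketch using the Hugging Face transformers library. It uses a masked model (BERT) for the fill-in-the-blank example above; GPT-style models predict the next word instead, but the statistical principle is the same. The model name is just one common choice, not something taken from the article.

```python
# Minimal illustration of the "guessing game": a language model filling in a
# blank with its statistically most likely words. Uses a masked model (BERT);
# GPT-style models predict the next word instead, but the idea is the same.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

for guess in fill("The cat sat on the [MASK]."):
    print(f"{guess['token_str']:>10s}  p={guess['score']:.3f}")
# Likely candidates: mat, floor, bed, couch ... plausible words, no understanding.
```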
She’s working to make German tech more inclusive
Nakeema Stefflbauer is bringing women from underrepresented backgrounds into the Berlin tech scene.
By Gouri Sharma and Nakeema Stefflbauer for MIT Technology Review on February 21, 2023
The cheap, racialised, Kenyan workers making ChatGPT “safe”
Stories about the hidden and exploitative racialised labour which fuels the development of technologies continue to surface, and this time it is about ChatGPT. Billy Perrigo, who previously reported on Meta’s content moderation sweatshop and on whistleblower Daniel Motaung, who took Meta to court, has shed light on how OpenAI has relied upon outsourced exploitative labour in Kenya to make ChatGPT less toxic.
Continue reading “The cheap, racialised, Kenyan workers making ChatGPT “safe””
Why isn’t ChatGPT a racist creep? Thanks to a bunch of Kenyans, for two dollars an hour
You won’t easily catch the popular chatbot ChatGPT using dirty words or racist language. It has been carefully trained by dozens of Kenyans. Their task: teaching the algorithm not to bring up murder, torture and rape, so that we, the users, are not served up filthy muck.
By Maurits Martijn for De Correspondent on January 28, 2023
Exclusive: The $2 Per Hour Workers Who Made ChatGPT Safer
OpenAI used outsourced workers in Kenya earning less than $2 per hour to scrub toxicity from ChatGPT.
By Billy Perrigo for Time on January 18, 2023
Uber’s facial recognition is locking Indian drivers out of their accounts
Some drivers in India are finding their accounts permanently blocked. Better transparency of the AI technology could help gig workers.
By Varsha Bansal for MIT Technology Review on December 6, 2022
AI innovation for whom, and at whose expense?
This fantastic article by Williams, Miceli and Gebru describes how the methodological shift of AI systems to deep-learning-based models has required enormous amounts of “data” for models to learn from. Large volumes of time-consuming work, such as labelling millions of images, can now be broken down into smaller tasks and outsourced to data labourers across the globe. These data labourers are paid terribly low wages and often work in dire conditions.
Continue reading “AI innovation for whom, and at whose expense?”
The Exploited Labor Behind Artificial Intelligence
Supporting transnational worker organizing should be at the center of the fight for “ethical AI.”
By Adrienne Williams, Milagros Miceli and Timnit Gebru for Noema on October 13, 2022
How Big Tech Is Importing India’s Caste Legacy to Silicon Valley
Graduates from the Indian Institutes of Technology are highly sought after by employers. They can also bring problems from home.
By Saritha Rai for Bloomberg on March 11, 2021
Whitewashing call centre workers’ accents
Silicon Valley strikes again, with yet another techno-solutionist idea. Sanas, a speech technology startup founded by three former Stanford students, aims to alter the accents of call centre workers in countries such as India and the Philippines. The goal is to make them sound white and American. With the slide of a button, a call centre worker’s voice is transformed into a slightly robotic, and unmistakably white, American voice.
Continue reading “Whitewashing call centre workers’ accents”
The AI startup erasing call center worker accents: is it fighting bias – or perpetuating it?
A Silicon Valley startup offers voice-altering tech to call center workers around the world: ‘Yes, this is wrong … but a lot of things exist in the world’
By Wilfred Chan for The Guardian on August 24, 2022
Buzzy Silicon Valley startup wants to make the world sound whiter
Sanas’ service has already launched in seven call centers. But experts are concerned it could dehumanize workers.
By Joshua Bote for SFGATE on August 22, 2022
Racist Technology in Action: How hiring tools can be sexist and racist
One of the classic examples of how AI systems can reinforce social injustice is Amazon’s AI hiring tool. In 2014, Amazon built an ‘AI-powered’ tool to assess resumes and recommend the top candidates who would go on to be interviewed. However, the tool turned out to be very biased, systematically preferring men over women.
Continue reading “Racist Technology in Action: How hiring tools can be sexist and racist”