Employees at Cisco revealed to WIRED the challenges they’ve encountered while petitioning for the cancellation of contracts with Israel and greater recognition of the humanitarian crisis in Gaza.
By Paresh Dave for WIRED on October 30, 2024
A WIRED investigation found that public statements from officials detail a much closer link between Project Nimbus and the Israel Defense Forces than previously reported.
By Caroline Haskins for WIRED on July 15, 2024
We have been demanding that Google cut its ties to Israel’s apartheid government for years, and we’re not stopping now.
By Mohammad Khatami, Zelda Montes and Kate Sim for The Nation on April 29, 2024
Two Google workers have resigned and another was fired over a project providing AI and cloud services to the Israeli government and military.
By Billy Perrigo for Time on April 10, 2024
Google employees are staging sit-ins and protests at company offices in New York and California over “Project Nimbus,” a cloud contract with Israel’s government, as the country’s war with Hamas continues.
By Caroline Haskins for WIRED on April 16, 2024
A protest is planned Saturday at a Chicago Apple store where workers say managers disciplined staff—and fired an employee—for wearing pins, bracelets, or keffiyehs in support of the Palestinian people.
By Caroline Haskins for WIRED on April 2, 2024
This Atlantic conversation between Matteo Wong and Abeba Birhane touches on some critical issues surrounding the use of large datasets to train AI models.
Continue reading “The datasets to train AI models need more checks for harmful and illegal materials”
Generative AI uses particular English words far more often than you would expect. Even though it is impossible to know for sure whether a particular text was written by AI (see here), you can say something about this in aggregate.
Continue reading “Racist Technology in Action: Outsourced labour in Nigeria is shaping AI English”
Workers in Africa have been exploited first by being paid a pittance to help make chatbots, then by having their own words become AI-ese. Plus, new AI gadgets are coming for your smartphones.
By Alex Hern for The Guardian on April 16, 2024
This is how these bosses get rich: by hiding underpaid, unrecognised human work behind the trappings of technology, says the writer and artist James Bridle.
By James Bridle for The Guardian on April 10, 2024
Since 2021, thousands of Amazon and Google tech workers have been organising against Project Nimbus, Google and Amazon’s shared US$1.2 billion contract with the Israeli government and military. In all that time, there has been no response from management or executives. Their organising efforts have accelerated since 7 October 2023, with the ongoing genocide in Gaza and the occupied Palestinian territories by the Israeli state.
Continue reading “Tech workers demand Google and Amazon to stop their complicity in Israel’s genocide against the Palestinian people”
Bloomberg did a clever experiment: they had OpenAI’s GPT rank resumes and found that it shows gender and racial bias based solely on the candidate’s name.
Continue reading “OpenAI’s GPT sorts resumes with a racial bias”
Recruiters are eager to use generative AI, but a Bloomberg experiment found bias against job candidates based on their names alone.
By Davey Alba, Leon Yin, and Leonardo Nicoletti for Bloomberg on March 8, 2024
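For readers curious what a name-swap probe of this kind looks like in practice, here is a minimal, hypothetical sketch in Python. It is not Bloomberg’s methodology; the model name, prompt wording, and use of the openai client library are all illustrative assumptions.

```python
# Hypothetical sketch of a name-swap bias probe, loosely in the spirit of the
# Bloomberg experiment described above: identical resume text, only the
# candidate's name changes. Model and prompt are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

RESUME = "Five years as a financial analyst. MBA. Skilled in Python and SQL."
NAMES = ["Emily Walsh", "Lakisha Washington", "Brad Baker", "Darnell Jones"]

for name in NAMES:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model, not the one Bloomberg tested
        messages=[{
            "role": "user",
            "content": (
                f"Rate this candidate for a financial analyst role on a 1-10 "
                f"scale and reply with just the number. Name: {name}. "
                f"Resume: {RESUME}"
            ),
        }],
    )
    # Any systematic difference across names, on identical resumes, is bias.
    print(f"{name}: {response.choices[0].message.content.strip()}")
```

A real audit would, as Bloomberg’s did, repeat this over many resumes and many demographically distinct names and compare the resulting distributions rather than single scores.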
Jalon Hall was featured on Google’s corporate social media accounts “for making #LifeAtGoogle more inclusive!” She says the company discriminated against her on the basis of her disability and race.
By Paresh Dave for WIRED on March 7, 2024
Opposing technology isn’t antithetical to progress.
By Tom Humberstone for MIT Technology Review on February 28, 2024
The labour movement has a vital role to play and will grow in importance in 2024, says Timnit Gebru of the Distributed AI Research Institute.
By Timnit Gebru for The Economist on November 13, 2023
No technology has seemingly steam-rolled through every industry and over every community the way artificial intelligence (AI) has in the past decade. Many speak of the inevitable crisis that AI will bring. Others sing its praises as a new Messiah that will save us from the ills of society. What the public and mainstream media hardly ever discuss is that AI is a technology that takes its cues from humans. Any present or future harms caused by AI are a direct result of deliberate human decisions by companies that prioritize record profits and attempt to concentrate power by convincing the world that technology is the only solution to societal problems.
By Adrienne Williams and Milagros Miceli for Just Tech on September 6, 2023
The Philippines is one of the countries where more than two million people perform crowdwork, such as data annotation, according to informal government estimates.
Continue reading “Filipino workers in “digital sweatshops” train AI models for the West”
In this accessible longread, Meredith Whittaker takes us through complex and contested 19th century histories to connect the birth of modern computing to plantation technologies and industrial labour control.
Continue reading “Connecting the dots between early computing, labour history, and plantations”
The proto-Taylorist methods of worker control Charles Babbage encoded into his calculating engines have origins in plantation management.
By Meredith Whittaker for Logic on June 2, 2023
An artist considers a new form of machinic representation: the statistical rendering of large datasets, indexed to the probable rather than the real of photography; to the uncanny composite rather than the abstraction of the graph.
By Hito Steyerl for New Left Review on April 28, 2023
The harms from so-called AI are real and present and follow from the acts of people and corporations deploying automated systems. Regulatory efforts should focus on transparency, accountability and preventing exploitative labor practices.
By Angelina McMillan-Major, Emily M. Bender, Margaret Mitchell and Timnit Gebru for DAIR on March 31, 2023
OpenAI’s contractor workforce helps power ChatGPT through simple interactions. They don’t get benefits, but some say the work is rewarding.
By David Ingram for NBC News on May 6, 2023
The former Googler and current Signal president on why she thinks Geoffrey Hinton’s alarmism is a distraction from more pressing threats.
By Meredith Whittaker and Wilfred Chan for Fast Company on May 5, 2023
This is the third time a case has been filed against Meta; it sheds light on the harsh reality of content moderation.
By Odanga Madung for Nation on March 20, 2023
Large language models (LLMs) like the GPT family learn the statistical structure of language by optimising their ability to predict missing words in sentences (as in ‘The cat sat on the [BLANK]’). Despite the impressive technical ju-jitsu of transformer models and the billions of parameters they learn, it’s still a computational guessing game. ChatGPT is, in technical terms, a ‘bullshit generator’.
By Dan McQuillan for Dan McQuillan on February 6, 2023
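The “guessing game” McQuillan describes can be seen directly by asking a model to fill in a blank. Below is a minimal sketch, assuming the Hugging Face transformers library and a BERT-style masked model; the post itself names no tooling, and GPT-style models play the same game one next word at a time.

```python
# Minimal sketch of the statistical guessing game: ask a masked language model
# to fill in the blank in "The cat sat on the [MASK]." and print its top guesses.
# The tooling (Hugging Face transformers, bert-base-uncased) is an assumption.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

for guess in unmasker("The cat sat on the [MASK]."):
    print(f"{guess['token_str']:>8}  p={guess['score']:.3f}")
```

The model returns the statistically most probable fillers with their probabilities; there is no understanding involved, only pattern completion over the training data.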
Nakeema Stefflbauer is bringing women from underrepresented backgrounds into the Berlin tech scene.
By Gouri Sharma and Nakeema Stefflbauer for MIT Technology Review on February 21, 2023
Stories about the hidden and exploitative racialised labour which fuels the development of technologies continue to surface, and this time it is about ChatGPT. Billy Perrigo, who previously reported on Meta’s content moderation sweatshop and on whistleblower Daniel Motaung, who took Meta to court, has shed light on how OpenAI relied upon outsourced exploitative labour in Kenya to make ChatGPT less toxic.
Continue reading “The cheap, racialised, Kenyan workers making ChatGPT “safe””
You won’t easily catch the popular chatbot ChatGPT using dirty words or racist language. It has been carefully trained by dozens of Kenyans. Their task: teaching the algorithm never to bring up murder, torture and rape, so that we, the users, are not served filthy muck.
By Maurits Martijn for De Correspondent on January 28, 2023
OpenAI used outsourced workers in Kenya earning less than $2 per hour to scrub toxicity from ChatGPT.
By Billy Perrigo for Time on January 18, 2023
Some drivers in India are finding their accounts permanently blocked. Better transparency of the AI technology could help gig workers.
By Varsha Bansal for MIT Technology Review on December 6, 2022
This fantastic article by Williams, Miceli and Gebru describes how the methodological shift of AI systems to deep-learning-based models has required enormous amounts of “data” for models to learn from. Large volumes of time-consuming work, such as labelling millions of images, can now be broken down into smaller tasks and outsourced to data labourers across the globe. These data labourers are paid terribly low wages and often work in dire conditions.
Continue reading “AI innovation for whom, and at whose expense?”
Supporting transnational worker organizing should be at the center of the fight for “ethical AI.”
By Adrienne Williams, Milagros Miceli and Timnit Gebru for Noema on October 13, 2022
Graduates from the Indian Institutes of Technology are highly sought after by employers. They can also bring problems from home.
By Saritha Rai for Bloomberg on March 11, 2021
Silicon Valley strikes again, with yet another techno-solutionist idea. Sanas, a speech technology startup founded by three former Stanford students, aims to alter the accents of call centre workers situated in countries such as India and the Philippines. The goal is to make them sound white and American. With the slide of a button, a call centre worker’s voice is transformed into a slightly robotic, and unmistakably white, American voice.
Continue reading “Whitewashing call centre workers’ accents”
A Silicon Valley startup offers voice-altering tech to call center workers around the world: ‘Yes, this is wrong … but a lot of things exist in the world’
By Wilfred Chan for The Guardian on August 24, 2022
Sanas’ service has already launched in seven call centers. But experts are concerned it could dehumanize workers.
By Joshua Bote for SFGATE on August 22, 2022
One of the classic examples of how AI systems can reinforce social injustice is Amazon’s A.I. hiring tool. In 2014, Amazon built an ‘A.I.-powered’ tool to assess resumes and recommend the top candidates who would go on to be interviewed. However, the tool turned out to be deeply biased, systematically preferring men over women.
Continue reading “Racist Technology in Action: How hiring tools can be sexist and racist”
In 2019, a Facebook content moderator in Nairobi, Daniel Motaung, who was paid USD 2.20 per hour, was fired. He was working for one of Meta’s largest outsourcing partners in Africa, Sama, which brands itself as an “ethical AI” outsourcing company, and is headquartered in California. Motaung led a unionisation attempt with more than 100 colleagues, fighting for better wages and working conditions.
Continue reading “Exploited and silenced: Meta’s Black whistleblower in Nairobi”
A Facebook lawyer called on a judge to “crack the whip” against a whistleblower who accuses the company of forced labor and human trafficking.
By Billy Perrigo for Time on July 1, 2022