Shortly after Israel escalated its violations of the ceasefire in Gaza, bombing and murdering hundreds of Palestinians, Google announced a US$32 billion acquisition of Wiz.
Google to pay $28m to settle claims it favoured white and Asian employees
Class action lawsuit alleged company discriminated against minority background staff on pay and career opportunities.
From The Guardian on March 19, 2025
Google Calendar removes Black History Month, Pride and other cultural events
Company says listed holidays were not ‘sustainable’ for its model as tech firms roll back diversity efforts.
By Marina Dunbar for The Guardian on February 11, 2025
Can Humanity Survive AI?
With the development of artificial intelligence racing forward at warp speed, some of the richest men in the world may be deciding the fate of humanity right now.
By Garrison Lovely for Jacobin on January 22, 2025
Why ‘open’ AI systems are actually closed, and why this matters
This paper examines ‘open’ artificial intelligence (AI). Claims about ‘open’ AI often lack precision, frequently eliding scrutiny of substantial industry concentration in large-scale AI development and deployment, and often incorrectly applying understandings of ‘open’ imported from free and open-source software to AI systems. At present, powerful actors are seeking to shape policy using claims that ‘open’ AI is either beneficial to innovation and democracy, on the one hand, or detrimental to safety, on the other. When policy is being shaped, definitions matter. To add clarity to this debate, we examine the basis for claims of openness in AI, and offer a material analysis of what AI is and what ‘openness’ in AI can and cannot provide: examining models, data, labour, frameworks, and computational power. We highlight three main affordances of ‘open’ AI, namely transparency, reusability, and extensibility, and we observe that maximally ‘open’ AI allows some forms of oversight and experimentation on top of existing models. However, we find that openness alone does not perturb the concentration of power in AI. Just as many traditional open-source software projects were co-opted in various ways by large technology companies, we show how rhetoric around ‘open’ AI is frequently wielded in ways that exacerbate rather than reduce concentration of power in the AI sector.
By David Gray Widder, Meredith Whittaker, and Sarah Myers West for Nature on November 27, 2024
Tech workers face retaliation for Palestine solidarity
As we wrote earlier, tech companies are deeply complicit in the current genocide in Gaza as well as the broader oppression in the occupied Palestinian territories.
Tech Workers’ Testimonies: Stories of Suppression of Palestinian Advocacy in the Workplace
The Arab Center for the Advancement of Social Media has released a new report titled, “Delete the Issue: Tech Worker Testimonies on Palestinian Advocacy and Workplace Suppression.” The report, the first of its kind, shares testimonies gathered from current and former employees in major technology companies, including Meta, Google, PayPal, Microsoft, LinkedIn, and Cisco. It highlights their experiences supporting Palestinian rights in the workplace and the companies’ efforts to restrict freedom of expression on the matter.
From 7amleh on November 11, 2024
Tech companies’ complicity in the ongoing genocide in Gaza and Palestine
As I write this piece, an Israeli airstrike has hit makeshift tents near Al-Aqsa Martyrs Hospital in Deir al Balah, burning tents and people alive. The Israeli military bombed an aid distribution point in Jabalia, wounding 50 people who were waiting for flour. The entire north of Gaza has been besieged by the Israeli Occupying Forces for the past 10 days, trapping 400,000 Palestinians without food, water, or medical supplies. Every day since last October, Israel, with the help of its western allies, intensifies its assault on Palestine, each time pushing the boundaries of what is comprehensible. There are no moral or legal boundaries Israel, and its allies, will not cross. The systematic ethnic cleansing of Palestine, which has been the basis of the settler-colonial Zionist project since its inception, has accelerated since 7th October 2023. From Palestine to Lebanon, Syria and Yemen, Israel and its allies continue their violence with impunity. Meanwhile, mainstream western news media are either silent in their reporting or complicit in abetting the ongoing destruction of the Palestinian people and the resistance.
AI was supposed to spare civilian lives in wartime. In reality, more people are dying
Artificial intelligence was supposed to mean fewer civilian deaths during wars. In reality, more people are dying. Because where people are reduced to data points, firing soon feels objective and correct.
By Lauren Gould, Linde Arentze, and Marijn Hoijtink for De Groene Amsterdammer on July 24, 2024
The Hidden Ties Between Google and Amazon’s Project Nimbus and Israel’s Military
A WIRED investigation found public statements from officials detail a much closer link between Project Nimbus and Israel Defense Forces than previously reported.
By Caroline Haskins for WIRED on July 15, 2024
Google Fired Us for Protesting Its Complicity in the War on Gaza. But We Won’t Be Silenced.
We have been demanding that Google cut its ties to Israel’s apartheid government for years, and we’re not stopping now.
By Mohammad Khatami, Zelda Montes, and Kate Sim for The Nation on April 29, 2024
Google Contract Shows Deal With Israel Defense Ministry
Google has negotiated a deeper relationship with the Israeli Ministry of Defense during the war in Gaza, a document seen by TIME shows.
By Billy Perrigo for Time on April 12, 2024
Google Workers Revolt Over $1.2 Billion Israel Contract
Two Google workers have resigned and another was fired over a project providing AI and cloud services to the Israeli government and military.
By Billy Perrigo for Time on April 10, 2024
Google Workers Protest Cloud Contract With Israel’s Government
Google employees are staging sit-ins and protests at company offices in New York and California over “Project Nimbus,” a cloud contract with Israel’s government, as the country’s war with Hamas continues.
By Caroline Haskins for WIRED on April 16, 2024
Tech workers demand that Google and Amazon stop their complicity in Israel’s genocide against the Palestinian people
Since 2021, thousands of Amazon and Google tech workers have been organising against Project Nimbus, Google and Amazon’s shared US$1.2 billion contract with the Israeli government and military. Since then, there has been no response from management or executives. Their organising efforts have accelerated since 7 October 2023, with the ongoing genocide in Gaza and the occupied Palestinian territories by the Israeli state.
Google Used a Black, Deaf Worker to Tout Its Diversity. Now She’s Suing for Discrimination
Jalon Hall was featured on Google’s corporate social media accounts “for making #LifeAtGoogle more inclusive!” She says the company discriminated against her on the basis of her disability and race.
By Paresh Dave for WIRED on March 7, 2024
LLMs become more covertly racist with human intervention
Researchers found that certain prejudices also worsened as models grew larger.
By James O’Donnell for MIT Technology Review on March 11, 2024
Google’s mistake with Gemini
You have probably heard that Google had to suspend its Gemini image feature after it showed people black Nazis and female popes. Well, I have a simple explanation for what happened here. Namely, the folks at Google wanted to avoid an embarrassment that they’d been involved with multiple times, and seen others get involved with: the “pale male” dataset problem. It happens especially at tech companies dominated by white men, and ironically, especially especially at tech companies dominated by white men who are careful about privacy, because then they only collect pictures of people who give consent, which is typically people who work there! See for example this webpage, or Safiya Noble’s entire book.
By Cathy O’Neil for mathbabe on March 12, 2024
Gemini image generation got it wrong. We’ll do better.
An explanation of how the issues with Gemini’s image generation of people happened, and what we’re doing to fix it.
By Prabhakar Raghavan for The Keyword on February 23, 2024
Google’s Gemini problem will be even worse outside the U.S.
It’s hard to keep a stereotyping machine out of trouble.
By Russell Brandom for Rest of World on February 29, 2024
Google does performative identity politics, nonpologises, pauses their efforts, and will invariably move on to its next shitty moneymaking move
In a shallow attempt to do representation for representation’s sake, Google has managed to draw the ire of the right-wing internet by generating historically inaccurate and overly inclusive portraits of historical figures.
Representing skin tone, or Google’s hubris versus the simplicity of Crayola
Google wants to “help computers ‘see’ our world”, and one of their ways of battling how current AI and machine learning systems perpetuate biases is to introduce a more inclusive scale of skin tone, the ‘Monk Skin Tone Scale’.
Racist Technology in Action: Image recognition is still not capable of differentiating gorillas from Black people
If this title feels like a déjà vu, it is because you most likely have, in fact, seen this before (perhaps even in our newsletter). It was back in 2015 that the controversy first arose, when Google released image recognition software that kept mislabelling Black people as gorillas (read here and here).
Google’s Photo App Still Can’t Find Gorillas. And Neither Can Apple’s.
Eight years after a controversy over Black people being mislabeled as gorillas by image analysis software — and despite big advances in computer vision — tech giants still fear repeating the mistake.
By Kashmir Hill and Nico Grant for The New York Times on May 22, 2023
‘There was all sorts of toxic behaviour’: Timnit Gebru on her sacking by Google, AI’s dangers and big tech’s biases
The Ethiopian-born computer scientist lost her job after pointing out the inequalities built into AI. But after decades working with technology companies, she knows all too much about discrimination.
By John Harris for The Guardian on May 22, 2023
Skin Tone Research @ Google
Introducing the Monk Skin Tone (MST) Scale, one of the ways we are moving AI forward with more inclusive computer vision tools.
From Skin Tone at Google
Consensus and subjectivity of skin tone annotation for ML fairness
Skin tone is an observable characteristic that is subjective, perceived differently by individuals (e.g., depending on their location or culture) and thus is complicated to annotate. That said, the ability to reliably and accurately annotate skin tone is highly important in computer vision. This became apparent in 2018, when the Gender Shades study highlighted that computer vision systems struggled to detect people with darker skin tones, and performed particularly poorly for women with darker skin tones. The study highlights the importance for computer researchers and practitioners to evaluate their technologies across the full range of skin tones and at intersections of identities. Beyond evaluating model performance on skin tone, skin tone annotations enable researchers to measure diversity and representation in image retrieval systems, dataset collection, and image generation. For all of these applications, a collection of meaningful and inclusive skin tone annotations is key.
By Candice Schumann and Gbolahan O. Olanubi for Google AI Blog on May 15, 2023
Researcher Meredith Whittaker says AI’s biggest risk isn’t ‘consciousness’—it’s the corporations that control them
The former Googler and current Signal president on why she thinks Geoffrey Hinton’s alarmism is a distraction from more pressing threats.
By Meredith Whittaker and Wilfred Chan for Fast Company on May 5, 2023
Is AI the gunpowder of the 21st century? ‘There are certainly parallels’
Gunpowder was a tremendously clever invention, with good as well as bad applications. Will we one day look at artificial intelligence in the same way?
By Claes de Vreese, Hind Dekker-Abdulaziz, Ilyaz Nasrullah, Martijn Bertisen, Nienke Schipper, and Oumaima Hajri for Trouw on May 2, 2023
You Are Not a Parrot
You are not a parrot. And a chatbot is not a human. And a linguist named Emily M. Bender is very worried about what will happen when we forget this.
By Elizabeth Weil and Emily M. Bender for New York Magazine on March 1, 2023
How Big Tech Is Importing India’s Caste Legacy to Silicon Valley
Graduates from the Indian Institutes of Technology are highly sought after by employers. They can also bring problems from home.
By Saritha Rai for Bloomberg on March 11, 2021
Algorithmic power and African indigenous languages: search engine autocomplete and the global multilingual Internet
Predictive language technologies – such as Google Search’s Autocomplete – constitute forms of algorithmic power that reflect and compound global power imbalances between Western technology companies and multilingual Internet users in the global South. Increasing attention is being paid to predictive language technologies and their impacts on individual users and public discourse. However, there is a lack of scholarship on how such technologies interact with African languages. Addressing this gap, the article presents data from experimentation with autocomplete predictions/suggestions for gendered or politicised keywords in Amharic, Kiswahili and Somali. It demonstrates that autocomplete functions for these languages and how users may be exposed to harmful content due to an apparent lack of filtering of problematic ‘predictions’. Drawing on debates on algorithmic power and digital colonialism, the article demonstrates that global power imbalances manifest here not through a lack of online African indigenous language content, but rather in regard to the moderation of content across diverse cultural and linguistic contexts. This raises dilemmas for actors invested in the multilingual Internet between risks of digital surveillance and effective platform oversight, which could prevent algorithmic harms to users engaging with platforms in a myriad of languages and diverse socio-cultural and political environments.
By Peter Chonka, Stephanie Diepeveen and Yidnekachew Haile for SAGE Journals on June 22, 2022
Forget sentience… the worry is that AI copies human bias
The fuss about a bot’s ‘consciousness’ obscures far more troubling concerns.
By Kenan Malik for The Guardian on June 19, 2022
Two new technology initiatives focused on (racial) justice
We are happy to see that more and more attention is being paid to how technology intersects with problems around (racial) justice. Recently two new initiatives have launched that we would like to highlight.
For truly ethical AI, its research must be independent from big tech
We must curb the power of Silicon Valley and protect those who speak up about the harms of AI.
By Timnit Gebru for The Guardian on December 6, 2021
Google fired its star AI researcher one year ago. Now she’s launching her own institute
Timnit Gebru is launching Distributed Artificial Intelligence Research Institute (DAIR) to document AI’s harms on marginalized groups.
By Nitasha Tiku for Washington Post on December 2, 2021
Refugees help power machine learning advances at Microsoft, Facebook, and Amazon
Big tech relies on the victims of economic collapse.
By Phil Jones for Rest of World on September 22, 2021
The ‘three black teenagers’ search shows it is society, not Google, that is racist
Twitter outrage over image search results of black and white teens is misdirected. We must address the prejudice that feeds such negative portrayals.
By Antoine Allen for The Guardian on June 10, 2016
Unfortunately, we still live in a world in which skin colour is a problem
‘Daddy, can I have the skin-colour one?’ Surprised, I looked up from the colouring page I was filling in, to see my daughter pointing at a marker with a peach-like colour. Or perhaps it was more the colour of an apricot. In any case, the marker was certainly not her skin colour. My daughter may be two shades lighter than I am, but she is unmistakably brown.
By Ilyaz Nasrullah for Trouw on September 23, 2021
Racist Technology in Action: Racist search engine ads
Back in 2013, Harvard professor Latanya Sweeney was one of the first people to demonstrate racism (she called it ‘discrimination’) in online algorithms. She did this with her research on the ad delivery practices of Google.