As we wrote earlier, tech companies are deeply complicit in the current genocide in Gaza as well as the broader oppression in the occupied Palestinian territories.
Continue reading “Tech workers face retaliation for Palestine solidarity”

Tech Workers’ Testimonies: Stories of Suppression of Palestinian Advocacy in the Workplace
The Arab Center for the Advancement of Social Media has released a new report titled, “Delete the Issue: Tech Worker Testimonies on Palestinian Advocacy and Workplace Suppression.” The report, the first of its kind, shares testimonies gathered from current and former employees in major technology companies, including Meta, Google, PayPal, Microsoft, LinkedIn, and Cisco. It highlights their experiences supporting Palestinian rights in the workplace and the companies’ efforts to restrict freedom of expression on the matter.
From 7amleh on November 11, 2024
Digital Apartheid in Gaza: Unjust Content Moderation at the Request of Israel’s Cyber Unit
Government involvement in content moderation raises serious human rights concerns in every context. Since October 7, social media platforms have been challenged for unjustified takedowns of pro-Palestinian content—sometimes at the request of the Israeli government—and a simultaneous failure to remove hate speech towards Palestinians. More specifically, social media platforms have worked with the Israeli Cyber Unit—a government office set up to issue takedown requests to platforms—to remove content considered incitement to violence and terrorism, as well as any promotion of groups widely designated as terrorists.
By Jillian C. York and Paige Collings for Electronic Frontier Foundation (EFF) on July 26, 2024
Racism, misogyny, lies: how did X become so full of hatred? And is it ethical to keep using it?
Ever since Elon Musk took over Twitter, I and many others have been looking for alternatives. Who wants to share a platform with the likes of Andrew Tate and Tommy Robinson?
By Zoe Williams for The Guardian on September 5, 2024
War, Memes, Art, Protest, and Porn: Jail(break)ing Synthetic Imaginaries Under OpenAI’s Content Policy Restrictions
Using the method of jail(break)ing to study how the visualities of sensitive issues transform under the gaze of OpenAI’s GPT-4o, we found that:
- Jail(break)ing takes place when the prompts force the model to combine jailing (transforming or fine-tuning content to comply with content restrictions) and jailbreaking (attempting to bypass or circumvent these restrictions).
- Image-to-text generation allows more space for controversy than text-to-image.
- Visual outputs reveal issue-specific and shared transformation patterns for charged, ambiguous, or divisive artefacts.
- These patterns include foregrounding the background or ‘dressing up’ (porn), imitative disambiguation (memes), pink-washing (protest), cartoonization/anonymization (war), and exaggeration of style (art).
By Alexandra Rosca, Elena Pilipets, Energy Ng, Esmée Colbourne, Marina Loureiro, Marloes Geboers, and Riccardo Ventura for Digital Methods Initiative on August 6, 2024
Palestinian-American engineer accuses Meta of firing him over Gaza content
Ferras Hamad claims in lawsuit that Meta fired him for trying to fix bugs causing the suppression of Palestinians’ Instagram posts.
From The Guardian on June 5, 2024
Google Used a Black, Deaf Worker to Tout Its Diversity. Now She’s Suing for Discrimination
Jalon Hall was featured on Google’s corporate social media accounts “for making #LifeAtGoogle more inclusive!” She says the company discriminated against her on the basis of her disability and race.
By Paresh Dave for WIRED on March 7, 2024
Blackness in the Fediverse: A Conversation with Marcia X
A conversation about the #PlayVicious Mastodon instance.
By Marcia X and Ra’il I’Nasah Kiam for Logic on December 13, 2023
Meta wrong to remove graphic Israel-Gaza videos, oversight board says
Board describes the two videos as important for ‘informing the world about human suffering on both sides’.
By Blake Montgomery for The Guardian on December 19, 2023
Racist Technology in Action: Meta systemically censors and silences Palestinian content globally
The censorship and silencing of Palestinian voices, and of those who support Palestine, is not new. However, since the escalation of Israel’s violence in the Gaza Strip on 7 October 2023, the scale of censorship has heightened significantly, particularly on social media platforms such as Instagram and Facebook. In December 2023, Human Rights Watch (HRW) released a 51-page report stating that Meta has engaged in systematic and global censorship of content related to Palestine since October 7th.
Continue reading “Racist Technology in Action: Meta systemically censors and silences Palestinian content globally”

Timnit Gebru says harmful AI systems need to be stopped
The labour movement has a vital role to play and will grow in importance in 2024, says Timnit Gebru of the Distributed AI Research Institute.
By Timnit Gebru for The Economist on November 13, 2023
Racist Technology in Action: Generative/ing AI Bias
By now we know that generative image AI reproduces and amplifies sexism, racism, and other social systems of oppression. The latest example is of AI-generated stickers in WhatsApp that systematically depict Palestinian men and boys with rifles and guns.
Continue reading “Racist Technology in Action: Generative/ing AI Bias”

WhatsApp’s AI shows gun-wielding children when prompted with ‘Palestine’
By contrast, prompts for ‘Israeli’ do not generate images of people wielding guns, even in response to a prompt for ‘Israel army’.
By Johana Bhuiyan for The Guardian on November 3, 2023
How many more genocides will Meta facilitate?
For years, Meta has censored the communications of Palestinians and communication about the Palestinian cause. Yet this is not (only) a “Big Tech problem”. Meta’s policies came about under pressure from governments, among others. Those same governments are now choosing not to question Meta about its role in the possible genocide of Palestinians.
By Evely Austin and Nadia Benaissa for Bits of Freedom on November 3, 2023
Facebook Report Concludes Company Censorship Violated Palestinian Human Rights
A report commissioned by Meta — Facebook and Instagram’s parent company — found bias against Palestinians during an Israeli assault last May.
By Sam Biddle for The Intercept on September 21, 2022
Data Work and its Layers of (In)visibility
No technology has seemingly steam-rolled through every industry and over every community the way artificial intelligence (AI) has in the past decade. Many speak of the inevitable crisis that AI will bring. Others sing its praises as a new Messiah that will save us from the ills of society. What the public and mainstream media hardly ever discuss is that AI is a technology that takes its cues from humans. Any present or future harms caused by AI are a direct result of deliberate human decisions by companies prioritizing record profits and attempting to concentrate power by convincing the world that technology is the only solution to societal problems.
By Adrienne Williams and Milagros Miceli for Just Tech on September 6, 2023
Black Artists Say A.I. Shows Bias, With Algorithms Erasing Their History
Tech companies acknowledge machine-learning algorithms can perpetuate discrimination and need improvement.
By Zachary Small for The New York Times on July 4, 2023
Mean Images
An artist considers a new form of machinic representation: the statistical rendering of large datasets, indexed to the probable rather than the real of photography; to the uncanny composite rather than the abstraction of the graph.
By Hito Steyerl for New Left Review on April 28, 2023
Meta’s clampdown on Palestine speech is far from ‘unintentional’
A report validated Palestinian experiences of social media censorship in May 2021, but missed how those policies are biased by design.
By Marwa Fatafta for +972 Magazine on October 9, 2022
Dark reality of content moderation: Meta sued for poor work conditions
This is the third case filed against Meta, and it sheds light on the harsh reality of content moderation.
By Odanga Madung for Nation on March 20, 2023
Exclusive: The $2 Per Hour Workers Who Made ChatGPT Safer
OpenAI used outsourced workers in Kenya earning less than $2 per hour to scrub toxicity from ChatGPT.
By Billy Perrigo for Time on January 18, 2023
When Black Death Goes Viral: How Algorithms of Oppression (Re)Produce Racism and Racial Trauma
Concerned about how seeing images of Black people dead and dying would affect young social media users, I conducted a study to understand how digitally mediated traumas were impacting Black girls’ mental and emotional wellness.
By Tiera Tanksley for SAGE Perspectives on January 4, 2023
What will happen next for Black Twitter?
Now that Elon Musk has bought the social media platform, some users fear a unique form of Black witnessing will be lost.
By Yusra Farzan for The Guardian on November 30, 2022
The Whiteness of Mastodon
A conversation with Dr. Johnathan Flowers about Elon Musk’s changes at Twitter and the dynamics on Mastodon, the decentralized alternative.
By Johnathan Flowers and Justin Hendrix for Tech Policy Press on November 23, 2022
Mastodon could make the public sphere less toxic, but not for all
The open-source social network gained millions of new users following Twitter’s takeover. While some of its features could improve the quality of public discourse, disadvantaged communities might be excluded.
By Nicolas Kayser-Bril for AlgorithmWatch on November 30, 2022
Social media firms face big UK fines if they fail to stop sexist and racist content
Revised online safety bill proposes fines of 10% of revenue but drops harmful communications offence.
By Dan Milmo for The Guardian on November 28, 2022
The Exploited Labor Behind Artificial Intelligence
Supporting transnational worker organizing should be at the center of the fight for “ethical AI.”
By Adrienne Williams, Milagros Miceli and Timnit Gebru for Noema on October 13, 2022
How aspiring influencers are forced to fight the algorithm
Figuring out social media platforms’ hidden rules is hard work—and it falls more heavily on creators from marginalized backgrounds.
By Abby Ohlheiser for MIT Technology Review on July 14, 2022
Algorithmic power and African indigenous languages: search engine autocomplete and the global multilingual Internet
Predictive language technologies – such as Google Search’s Autocomplete – constitute forms of algorithmic power that reflect and compound global power imbalances between Western technology companies and multilingual Internet users in the global South. Increasing attention is being paid to predictive language technologies and their impacts on individual users and public discourse. However, there is a lack of scholarship on how such technologies interact with African languages. Addressing this gap, the article presents data from experimentation with autocomplete predictions/suggestions for gendered or politicised keywords in Amharic, Kiswahili and Somali. It demonstrates that autocomplete functions for these languages and how users may be exposed to harmful content due to an apparent lack of filtering of problematic ‘predictions’. Drawing on debates on algorithmic power and digital colonialism, the article demonstrates that global power imbalances manifest here not through a lack of online African indigenous language content, but rather in regard to the moderation of content across diverse cultural and linguistic contexts. This raises dilemmas for actors invested in the multilingual Internet between risks of digital surveillance and effective platform oversight, which could prevent algorithmic harms to users engaging with platforms in a myriad of languages and diverse socio-cultural and political environments.
By Peter Chonka, Stephanie Diepeveen and Yidnekachew Haile for SAGE Journals on June 22, 2022
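The moderation gap the article describes can be illustrated with a toy sketch (the blocklist, terms, and function below are hypothetical illustrations, not drawn from the study): a prediction filter whose blocklist only covers one language will pass suggestions in every other language through unchanged, however harmful.

```python
# Toy sketch of a blocklist-based autocomplete filter. The blocklist and
# example terms are hypothetical, not taken from the study or any real system.
# Because the blocklist only covers English, predictions in other languages
# are never filtered -- the moderation gap the article describes.

BLOCKLIST_EN = {"slur", "attack"}  # placeholder English-only blocklist

def filter_predictions(predictions):
    """Drop any prediction containing a blocklisted token."""
    return [
        p for p in predictions
        if not any(tok in BLOCKLIST_EN for tok in p.lower().split())
    ]

# English predictions are moderated...
print(filter_predictions(["weather today", "slur example"]))  # ['weather today']
# ...but a prediction in any uncovered language survives untouched,
# because no equivalent blocklist exists for it.
print(filter_predictions(["prediction in an uncovered language"]))
```

The sketch also shows why simply translating a blocklist is insufficient: effective oversight requires moderation built for each linguistic and cultural context, which is the dilemma the article raises.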
Facebook Is Attempting to Silence a Black Whistleblower
A Facebook lawyer called on a judge to “crack the whip” against a whistleblower who accuses the company of forced labor and human trafficking.
By Billy Perrigo for Time on July 1, 2022
Inventing language to avoid algorithmic censorship
Platforms like TikTok, Twitch and Instagram use algorithmic filters to automatically block certain posts on the basis of the language they use. The Washington Post shows how this has created ‘algospeak’, a whole new vocabulary. So instead of ‘dead’ users write ‘unalive’, they use ‘SA’ instead of ‘sexual assault’, and write ‘spicy eggplant’ rather than ‘vibrator’.
Continue reading “Inventing language to avoid algorithmic censorship”

Internet ‘algospeak’ is changing our language in real time, from ‘nip nops’ to ‘le dollar bean’
To avoid angering the almighty algorithm, people are creating a new vocabulary.
By Taylor Lorenz for Washington Post on April 8, 2022
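The dynamic behind algospeak, a naive keyword filter on the platform side and coded substitutions on the user side, can be sketched in a few lines. The blocklist and substitution map below are illustrative only, not any platform's actual rules:

```python
# Illustrative sketch of algospeak. A naive substring filter blocks posts
# containing flagged terms, so users swap in coded vocabulary that the
# filter does not recognize. Blocklist and substitutions are made up here.

BLOCKED = {"dead", "sexual assault"}

def is_blocked(post: str) -> bool:
    """Naive platform-side filter: flag any post containing a blocked term."""
    return any(term in post.lower() for term in BLOCKED)

ALGOSPEAK = {"dead": "unalive", "sexual assault": "SA"}

def to_algospeak(post: str) -> str:
    """User-side evasion: replace each blocked term with its coded form."""
    for term, coded in ALGOSPEAK.items():
        post = post.replace(term, coded)
    return post

post = "the character is dead"
print(is_blocked(post))                # True: the literal wording is filtered
print(is_blocked(to_algospeak(post)))  # False: the coded version slips through
```

The substring matching also shows why such filters over-block (‘dead’ would flag ‘deadline’ too), which is part of why users route around them rather than trust appeals.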
Double Standards in Social Media Content Moderation
Platform rules often subject marginalized communities to heightened scrutiny while providing them with too little protection from harm.
By Laura Hecht-Felella and Ángel Díaz for Brennan Center for Justice on April 8, 2021
A ‘safe space for racists’: antisemitism report criticises social media giants
Facebook, Twitter, Instagram, YouTube and TikTok failing to act on most reported anti-Jewish posts, says study.
By Maya Wolfe-Robinson for The Guardian on August 1, 2021
Facebook accused of ‘discriminatory and racist’ behaviour after removing historical PNG images
Group publishing archival photos claims images showing traditional dress or ceremonies were deleted for allegedly containing nudity.
By Mostafa Rachwani for The Guardian on May 27, 2021
Can Outside Pressure Change Silicon Valley?
How has activism evolved in our digital society? In this episode of Sudhir Breaks the Internet, Sudhir talks to Jade Magnus Ogunnaike about the intersection of big tech and civil rights. She is a senior campaign director for Color of Change. It’s a racial justice organization that blends traditional organizing efforts with an updated playbook for how to make change.
By Jade Magnus Ogunnaike and Sudhir Venkatesh for Freakonomics on May 17, 2021
At the mercy of the TikTok algorithm?
In this article for The Markup, Dara Kerr offers interesting insight into the plight of TikTokers who try to earn a living on the platform. TikTok’s algorithm, or how it decides what content gets a lot of exposure, is notoriously vague. With ever-changing policies and metrics, Kerr recounts how difficult it is to build up and retain a following on the platform. This vagueness not only creates difficulty for creators trying to monetize their content, but also leaves more room for TikTok to suppress or spread content at will.
Continue reading “At the mercy of the TikTok algorithm?”

Shadow Bans, Dopamine Hits, and Viral Videos, All in the Life of TikTok Creators
A secretive algorithm that’s constantly being tweaked can turn influencers’ accounts, and their prospects, upside down.
By Dara Kerr for The Markup on April 22, 2021
Google blocks advertisers from targeting Black Lives Matter
In this piece for The Markup, Leon Yin and Aaron Sankin expose how Google bans advertisers from targeting terms such as “Black lives matter”, “antifascist” or “Muslim fashion”. At the same time, keywords such as “White lives matter” or “Christian fashion” are not banned. When they raised this striking discrepancy with Google, its response was to fix the discrepancies between religions and races by blocking all such terms, as well as by blocking even more social justice-related keywords such as “I can’t breathe” or “LGBTQ”. Blocking these terms for ad placement can reduce the revenue of YouTubers fighting for these causes. Yin and Sankin place this policy in stark contrast to Google’s support for the Black Lives Matter movement.
Continue reading “Google blocks advertisers from targeting Black Lives Matter”