Racist Technology in Action: Meta systemically censors and silences Palestinian content globally

The censorship and silencing of Palestinian voices, and of those who support Palestine, is not new. However, since the escalation of Israel’s violence in the Gaza Strip after 7 October 2023, the scale of censorship has increased significantly, particularly on social media platforms such as Instagram and Facebook. In December 2023, Human Rights Watch (HRW) released a 51-page report stating that Meta has engaged in systematic and global censorship of content related to Palestine since October 7th.

Continue reading “Racist Technology in Action: Meta systemically censors and silences Palestinian content globally”

How many more genocides will Meta facilitate?

For years, Meta has censored the communication of Palestinians and communication about the Palestinian cause. Yet this is not (only) a “Big Tech problem”: Meta’s policies came about under pressure from governments, among others. Those same governments are now choosing not to question Meta about its role in the possible genocide of the Palestinians.

By Evely Austin and Nadia Benaissa for Bits of Freedom on November 3, 2023

Data Work and its Layers of (In)visibility

No technology has seemingly steam-rolled through every industry and over every community the way artificial intelligence (AI) has in the past decade. Many speak of the inevitable crisis that AI will bring. Others sing its praises as a new Messiah that will save us from the ills of society. What the public and mainstream media hardly ever discuss is that AI is a technology that takes its cues from humans. Any present or future harms caused by AI are a direct result of deliberate human decisions, with companies prioritizing record profits and attempting to concentrate power by convincing the world that technology is the only solution to societal problems.

By Adrienne Williams and Milagros Miceli for Just Tech on September 6, 2023

Mean Images

An artist considers a new form of machinic representation: the statistical rendering of large datasets, indexed to the probable rather than the real of photography; to the uncanny composite rather than the abstraction of the graph.

By Hito Steyerl for New Left Review on April 28, 2023

The Whiteness of Mastodon

A conversation with Dr. Johnathan Flowers about Elon Musk’s changes at Twitter and the dynamics on Mastodon, the decentralized alternative.

By Johnathan Flowers and Justin Hendrix for Tech Policy Press on November 23, 2022

Algorithmic power and African indigenous languages: search engine autocomplete and the global multilingual Internet

Predictive language technologies – such as Google Search’s Autocomplete – constitute forms of algorithmic power that reflect and compound global power imbalances between Western technology companies and multilingual Internet users in the global South. Increasing attention is being paid to predictive language technologies and their impacts on individual users and public discourse. However, there is a lack of scholarship on how such technologies interact with African languages. Addressing this gap, the article presents data from experimentation with autocomplete predictions/suggestions for gendered or politicised keywords in Amharic, Kiswahili and Somali. It demonstrates that autocomplete functions for these languages and how users may be exposed to harmful content due to an apparent lack of filtering of problematic ‘predictions’. Drawing on debates on algorithmic power and digital colonialism, the article demonstrates that global power imbalances manifest here not through a lack of online African indigenous language content, but rather in regard to the moderation of content across diverse cultural and linguistic contexts. This raises dilemmas for actors invested in the multilingual Internet between risks of digital surveillance and effective platform oversight, which could prevent algorithmic harms to users engaging with platforms in a myriad of languages and diverse socio-cultural and political environments.

By Peter Chonka, Stephanie Diepeveen and Yidnekachew Haile for SAGE Journals on June 22, 2022

Inventing language to avoid algorithmic censorship

Platforms like TikTok, Twitch and Instagram use algorithmic filters to automatically block certain posts on the basis of the language they use. The Washington Post shows how this has created ‘algospeak’, a whole new vocabulary. So instead of ‘dead’ users write ‘unalive’, they use ‘SA’ instead of ‘sexual assault’, and write ‘spicy eggplant’ rather than ‘vibrator’.
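
The platforms’ actual filters are proprietary, but a minimal sketch helps show why these substitutions work. The blocklist and exact-keyword matching below are assumptions for illustration only, not how TikTok, Twitch or Instagram actually moderate content:

```python
import re

# Hypothetical blocklist for illustration only; real platform filters are
# proprietary and far more sophisticated than exact keyword matching.
BLOCKED_TERMS = {"dead", "sexual assault"}

def is_blocked(post: str) -> bool:
    """Return True if the post contains any blocked term as a whole word or phrase."""
    text = post.lower()
    return any(re.search(rf"\b{re.escape(term)}\b", text) for term in BLOCKED_TERMS)

print(is_blocked("he was found dead"))      # True: exact keyword match
print(is_blocked("he was found unalive"))   # False: 'algospeak' slips past
print(is_blocked("a survivor of SA"))       # False: the abbreviation evades the filter
```

Evading even this trivial matcher takes nothing more than a new spelling, which is the dynamic the Washington Post documents at platform scale.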

Continue reading “Inventing language to avoid algorithmic censorship”

Can Outside Pressure Change Silicon Valley?

How has activism evolved in our digital society? In this episode of Sudhir Breaks the Internet, Sudhir talks to Jade Magnus Ogunnaike about the intersection of big tech and civil rights. She is a senior campaign director for Color of Change. It’s a racial justice organization that blends traditional organizing efforts with an updated playbook for how to make change.

By Jade Magnus Ogunnaike and Sudhir Venkatesh for Freakonomics on May 17, 2021

At the mercy of the TikTok algorithm?

In this article for The Markup, Dara Kerr offers an interesting insight into the plight of TikTokers who try to earn a living on the platform. TikTok’s algorithm, or how it decides what content gets a lot of exposure, is notoriously vague. With ever-changing policies and metrics, Kerr recounts how difficult it is to build up and retain a following on the platform. This vagueness not only creates difficulty for creators trying to monetize their content, but also leaves more room for TikTok to suppress or spread content at will.

Continue reading “At the mercy of the TikTok algorithm?”

Google blocks advertisers from targeting Black Lives Matter

In this piece for The Markup, Leon Yin and Aaron Sankin expose how Google bans advertisers from targeting terms such as “Black lives matter”, “antifascist” or “Muslim fashion”. At the same time, keywords such as “White lives matter” or “Christian fashion” are not banned. When they raised this striking discrepancy with Google, its response was to fix the discrepancies between religions and races by blocking all such terms, as well as even more social justice related keywords such as “I can’t breathe” or “LGBTQ”. Blocking these terms for ad placement can reduce the revenue of YouTubers fighting for these causes. Yin and Sankin place this policy in stark contrast to Google’s support for the Black Lives Matter movement.

Continue reading “Google blocks advertisers from targeting Black Lives Matter”

Filtering out the “Asians”

The article’s title speaks for itself: “Your iPhone’s Adult Content Filter Blocks Anything ‘Asian’”. Victoria Song has tested the claims made by The Independent: if you enable the “Limit Adult Websites” function in your iPhone’s Screen Time settings, you are blocked from seeing any Google search results for “Asian”. Related searches such as “Asian recipes” or “Southeast Asian” are also blocked by the adult content filter. There is no clarity or transparency about how search terms are classified as adult content, or whether the process is automated or done manually. Regardless of intention, the outcome and the lack of action by Google and Apple are unsurprising but disconcerting. This is far from a mistake; rather, it is a feature of their commercial practices and their disregard for the social harms of their business model.

Continue reading “Filtering out the “Asians””
