Algorithmic power and African indigenous languages: search engine autocomplete and the global multilingual Internet

Predictive language technologies – such as Google Search’s Autocomplete – constitute forms of algorithmic power that reflect and compound global power imbalances between Western technology companies and multilingual Internet users in the global South. Increasing attention is being paid to predictive language technologies and their impacts on individual users and public discourse. However, there is a lack of scholarship on how such technologies interact with African languages. Addressing this gap, the article presents data from experimentation with autocomplete predictions/suggestions for gendered or politicised keywords in Amharic, Kiswahili and Somali. It demonstrates that autocomplete functions for these languages and how users may be exposed to harmful content due to an apparent lack of filtering of problematic ‘predictions’. Drawing on debates on algorithmic power and digital colonialism, the article demonstrates that global power imbalances manifest here not through a lack of online African indigenous language content, but rather in regard to the moderation of content across diverse cultural and linguistic contexts. This raises dilemmas for actors invested in the multilingual Internet between risks of digital surveillance and effective platform oversight, which could prevent algorithmic harms to users engaging with platforms in a myriad of languages and diverse socio-cultural and political environments.

By Peter Chonka, Stephanie Diepeveen and Yidnekachew Haile for SAGE Journals on June 22, 2022

Inventing language to avoid algorithmic censorship

Platforms like TikTok, Twitch and Instagram use algorithmic filters to automatically block certain posts based on the language they use. The Washington Post shows how this has given rise to 'algospeak', a whole new vocabulary: instead of 'dead' users write 'unalive', they use 'SA' instead of 'sexual assault', and write 'spicy eggplant' rather than 'vibrator'.


Can Outside Pressure Change Silicon Valley?

How has activism evolved in our digital society? In this episode of Sudhir Breaks the Internet, Sudhir talks to Jade Magnus Ogunnaike about the intersection of big tech and civil rights. She is a senior campaign director for Color of Change. It’s a racial justice organization that blends traditional organizing efforts with an updated playbook for how to make change.

By Jade Magnus Ogunnaike and Sudhir Venkatesh for Freakonomics on May 17, 2021

At the mercy of the TikTok algorithm?

In this article for The Markup, Dara Kerr offers an interesting insight into the plight of TikTokers trying to earn a living on the platform. TikTok's algorithm, that is, how it decides which content gets wide exposure, is notoriously opaque. With ever-changing policies and metrics, Kerr recounts how difficult it is to build and retain a following on the platform. This opacity not only makes it hard for creators to monetise their content, but also leaves TikTok more room to suppress or amplify content at will.


Google blocks advertisers from targeting Black Lives Matter

In this piece for The Markup, Leon Yin and Aaron Sankin expose how Google bans advertisers from targeting terms such as "Black lives matter", "antifascist" or "Muslim fashion", while keywords such as "White lives matter" or "Christian fashion" are not banned. When they raised this striking discrepancy with Google, its response was to fix the discrepancies between religions and races by blocking all such terms, and to block even more social justice-related keywords such as "I can't breathe" and "LGBTQ". Blocking these terms for ad placement can reduce the revenue of YouTubers fighting for these causes. Yin and Sankin place this policy in stark contrast to Google's public support for the Black Lives Matter movement.


Filtering out the “Asians”

The article's title speaks for itself: "Your iPhone's Adult Content Filter Blocks Anything 'Asian'". Victoria Song tested the claims made by The Independent: if you enable the "Limit Adult Websites" function in your iPhone's Screen Time settings, you are blocked from seeing any Google search results for "Asian". Related searches, such as "Asian recipes" or "Southeast Asian", are also blocked by the adult content filter. There is no clarity or transparency about how search terms are classified as adult content, or whether the process is automated or manual. Regardless of intent, the outcome, and the inaction of Google and Apple, is unsurprising but disconcerting. This is not a mistake but a feature of their commercial practices and of their disregard for the social harms of their business model.


How Zwarte Piet is disappearing from Facebook

Moderation: Facebook's policy against Zwarte Piet is gaining considerable momentum. Pro-Piet pages are being hit hard, because opponents are reporting the posts on these pages en masse. Still, it remains to be seen whether Zwarte Piet will ever disappear from Facebook entirely.

By Reinier Kist and Wilfred Takken for NRC on August 31, 2020

Facebook refuses advertisement featuring OPZIJ cover with Black woman

Facebook took down an advertisement featuring the cover of the feminist monthly OPZIJ because it allegedly resembled a blackface image. The magazine's cover features Dr. Abbie Vandivere, the scientist who made international headlines with her discoveries during the restoration of Vermeer's Girl with a Pearl Earring for the Mauritshuis. Vandivere is Black and has painted her lips red in the photo.

By Mark Koster for Villamedia on August 17, 2020

Philosophers On GPT-3 (updated with replies by GPT-3)

Nine philosophers explore the various issues and questions raised by the newly released language model, GPT-3, in this edition of Philosophers On.

By Amanda Askell, Annette Zimmermann, C. Thi Nguyen, Carlos Montemayor, David Chalmers, GPT-3, Henry Shevlin, Justin Khoo, Regina Rini and Shannon Vallor for Daily Nous on July 30, 2020
