Jalon Hall was featured on Google’s corporate social media accounts “for making #LifeAtGoogle more inclusive!” She says the company discriminated against her on the basis of her disability and race.
By Paresh Dave for WIRED on March 7, 2024
A conversation about the #PlayVicious Mastodon instance.
By Marcia X and Ra’il I’Nasah Kiam for Logic on December 13, 2023
Board describes the two videos as important for ‘informing the world about human suffering on both sides’.
By Blake Montgomery for The Guardian on December 19, 2023
The censorship and silencing of Palestinian voices, and of those who support Palestine, is not new. However, since the escalation of Israel's violence in the Gaza strip after 7 October 2023, the scale of censorship has significantly heightened, particularly on social media platforms such as Instagram and Facebook. In December 2023, Human Rights Watch (HRW) released a 51-page report stating that Meta has engaged in systematic and global censorship of content related to Palestine since October 7th.
The labour movement has a vital role to play and will grow in importance in 2024, says Timnit Gebru of the Distributed AI Research Institute.
By Timnit Gebru for The Economist on November 13, 2023
By now we know that generative image AI reproduces and amplifies sexism, racism, and other social systems of oppression. The latest example is of AI-generated stickers in WhatsApp that systematically depict Palestinian men and boys with rifles and guns.
By contrast, prompts for ‘Israeli’ do not generate images of people wielding guns, even in response to a prompt for ‘Israel army’.
By Johana Bhuiyan for The Guardian on November 3, 2023
For years, Meta has censored the communications of Palestinians and communications about the Palestinian cause. Yet this is not (only) a “Big Tech problem”. Meta’s policy came about under pressure from governments, among others. Those same governments are now choosing not to question Meta about its role in the possible genocide of Palestinians.
By Evely Austin and Nadia Benaissa for Bits of Freedom on November 3, 2023
A report commissioned by Meta — Facebook and Instagram’s parent company — found bias against Palestinians during an Israeli assault last May.
By Sam Biddle for The Intercept on September 21, 2022
No technology has seemingly steam-rolled through every industry and over every community the way artificial intelligence (AI) has in the past decade. Many speak of the inevitable crisis that AI will bring. Others sing its praises as a new Messiah that will save us from the ails of society. What the public and mainstream media hardly ever discuss is that AI is a technology that takes its cues from humans. Any present or future harms caused by AI are a direct result of deliberate human decisions, with companies prioritizing record profits, in an attempt to concentrate power by convincing the world that technology is the only solution to societal problems.
By Adrienne Williams and Milagros Miceli for Just Tech on September 6, 2023
Tech companies acknowledge machine-learning algorithms can perpetuate discrimination and need improvement.
By Zachary Small for The New York Times on July 4, 2023
An artist considers a new form of machinic representation: the statistical rendering of large datasets, indexed to the probable rather than the real of photography; to the uncanny composite rather than the abstraction of the graph.
By Hito Steyerl for New Left Review on April 28, 2023
A report validated Palestinian experiences of social media censorship in May 2021, but missed how those policies are biased by design.
By Marwa Fatafta for +972 Magazine on October 9, 2022
This is the third time a case has been filed against Meta and sheds light on the harsh reality of content moderation.
By Odanga Madung for Nation on March 20, 2023
OpenAI used outsourced workers in Kenya earning less than $2 per hour to scrub toxicity from ChatGPT.
By Billy Perrigo for Time on January 18, 2023
Concerned about how seeing images of Black people dead and dying would affect young social media users, I conducted a study to understand how digitally mediated traumas were impacting Black girls’ mental and emotional wellness.
By Tiera Tanksley for SAGE Perspectives on January 4, 2023
Now that Elon Musk has bought the social media platform, some users fear a unique form of Black witnessing will be lost.
By Yusra Farzan for The Guardian on November 30, 2022
A conversation with Dr. Johnathan Flowers about Elon Musk’s changes at Twitter and the dynamics on Mastodon, the decentralized alternative.
By Johnathan Flowers and Justin Hendrix for Tech Policy Press on November 23, 2022
The open-source social network gained millions of new users following Twitter’s takeover. While some of its features could improve the quality of public discourse, disadvantaged communities might be excluded.
By Nicolas Kayser-Bril for AlgorithmWatch on November 30, 2022
Revised online safety bill proposes fines of 10% of revenue but drops harmful communications offence.
By Dan Milmo for The Guardian on November 28, 2022
Supporting transnational worker organizing should be at the center of the fight for “ethical AI.”
By Adrienne Williams, Milagros Miceli and Timnit Gebru for Noema on October 13, 2022
Figuring out social media platforms’ hidden rules is hard work—and it falls more heavily on creators from marginalized backgrounds.
By Abby Ohlheiser for MIT Technology Review on July 14, 2022
Predictive language technologies – such as Google Search’s Autocomplete – constitute forms of algorithmic power that reflect and compound global power imbalances between Western technology companies and multilingual Internet users in the global South. Increasing attention is being paid to predictive language technologies and their impacts on individual users and public discourse. However, there is a lack of scholarship on how such technologies interact with African languages. Addressing this gap, the article presents data from experimentation with autocomplete predictions/suggestions for gendered or politicised keywords in Amharic, Kiswahili and Somali. It demonstrates that autocomplete functions for these languages and how users may be exposed to harmful content due to an apparent lack of filtering of problematic ‘predictions’. Drawing on debates on algorithmic power and digital colonialism, the article demonstrates that global power imbalances manifest here not through a lack of online African indigenous language content, but rather in regard to the moderation of content across diverse cultural and linguistic contexts. This raises dilemmas for actors invested in the multilingual Internet between risks of digital surveillance and effective platform oversight, which could prevent algorithmic harms to users engaging with platforms in a myriad of languages and diverse socio-cultural and political environments.
By Peter Chonka, Stephanie Diepeveen and Yidnekachew Haile for SAGE Journals on June 22, 2022
A Facebook lawyer called on a judge to “crack the whip” against a whistleblower who accuses the company of forced labor and human trafficking.
By Billy Perrigo for Time on July 1, 2022
Platforms like TikTok, Twitch and Instagram use algorithmic filters to automatically block certain posts on the basis of the language they use. The Washington Post shows how this has created ‘algospeak’, a whole new vocabulary. So instead of ‘dead’ users write ‘unalive’, they use ‘SA’ instead of ‘sexual assault’, and write ‘spicy eggplant’ rather than ‘vibrator’.
To avoid angering the almighty algorithm, people are creating a new vocabulary.
By Taylor Lorenz for Washington Post on April 8, 2022
Platform rules often subject marginalized communities to heightened scrutiny while providing them with too little protection from harm.
By Laura Hecht-Felella and Ángel Díaz for Brennan Center for Justice on April 8, 2021
Facebook, Twitter, Instagram, YouTube and TikTok failing to act on most reported anti-Jewish posts, says study.
By Maya Wolfe-Robinson for The Guardian on August 1, 2021
Group publishing archival photos claims images showing traditional dress or ceremonies were deleted for allegedly containing nudity.
By Mostafa Rachwani for The Guardian on May 27, 2021
How has activism evolved in our digital society? In this episode of Sudhir Breaks the Internet, Sudhir talks to Jade Magnus Ogunnaike about the intersection of big tech and civil rights. She is a senior campaign director for Color of Change. It’s a racial justice organization that blends traditional organizing efforts with an updated playbook for how to make change.
By Jade Magnus Ogunnaike and Sudhir Venkatesh for Freakonomics on May 17, 2021
In this article for The Markup, Dara Kerr offers an interesting insight into the plight of TikTokers who try to earn a living on the platform. TikTok’s algorithm — how it decides what content gets a lot of exposure — is notoriously vague. With ever-changing policies and metrics, Kerr recounts how difficult it is to build up and retain a following on the platform. This vagueness not only creates difficulty for creators trying to monetize their content, but also leaves more room for TikTok to suppress or spread content at will.
A secretive algorithm that’s constantly being tweaked can turn influencers’ accounts, and their prospects, upside down.
By Dara Kerr for The Markup on April 22, 2021
In this piece for The Markup, Leon Yin and Aaron Sankin expose how Google bans advertisers from targeting terms such as “Black lives matter”, “antifascist” or “Muslim fashion”. At the same time, keywords such as “White lives matter” or “Christian fashion” are not banned. When they raised this striking discrepancy with Google, its response was to fix the discrepancies between religions and races by blocking all such terms, as well as by blocking even more social justice related keywords such as “I can’t breathe” or “LGBTQ”. Blocking these terms for ad placement can reduce the revenue of YouTubers fighting for these causes. Yin and Sankin place this policy in stark contrast to Google’s support for the Black Lives Matter movement.
The Oversight Board has upheld Facebook’s decision to remove specific content that violated the express prohibition on posting caricatures of Black people in the form of blackface, contained in its Hate Speech Community Standard.
From Oversight Board on April 13, 2021
The banning of images of Zwarte Piet fits within Facebook’s policy of countering racist blackface stereotypes on its platforms. That is the ruling of an external board to which both users and Facebook itself can appeal to test whether something was rightly removed or not.
By Pieter Sabel for Volkskrant on April 13, 2021
When a secretive start-up scraped the internet to build a facial-recognition tool, it tested a legal and ethical limit — and blew the future of privacy in America wide open.
By Kashmir Hill for The New York Times on March 18, 2021
The left must vie for control over the algorithms, data, and infrastructure that shape our lives.
By Meredith Whittaker and Nantina Vgontzas for The Nation on January 29, 2021
The article’s title speaks for itself: “Your iPhone’s Adult Content Filter Blocks Anything ‘Asian’”. Victoria Song has tested the claims made by The Independent: if you enable the “Limit Adult Websites” function in your iPhone’s Screen Time settings, then you are blocked from seeing any Google search results for “Asian”. Related searches such as “Asian recipes” or “Southeast Asian” are also blocked by the adult content filter. There is no clarity or transparency about how search terms come to be considered adult content, or whether the process is automated or done manually. Regardless of intention, the outcome and the lack of action by Google or Apple is unsurprising but disconcerting. It is far from a mistake, but rather a feature of their commercial practices and their disregard for the social harms of their business model.
Facebook placed a number of leftwing organizers on a restricted list during Biden’s inauguration. It’s part of a much bigger problem.
By Akin Olla for The Guardian on January 29, 2021