This fantastic article by Williams, Miceli and Gebru describes how the methodological shift of AI systems to deep-learning-based models has required enormous amounts of “data” for models to learn from. Large volumes of time-consuming work, such as labelling millions of images, can now be broken down into smaller tasks and outsourced to data labourers across the globe. These data labourers are paid terribly low wages and often work in dire conditions. Continue reading “AI innovation for whom, and at whose expense?”
Capitol Music Group faced a backlash for signing the artificial intelligence musician.
From BBC on August 24, 2022
Figuring out social media platforms’ hidden rules is hard work—and it falls more heavily on creators from marginalized backgrounds.
By Abby Ohlheiser for MIT Technology Review on July 14, 2022
Predictive language technologies – such as Google Search’s Autocomplete – constitute forms of algorithmic power that reflect and compound global power imbalances between Western technology companies and multilingual Internet users in the global South. Increasing attention is being paid to predictive language technologies and their impacts on individual users and public discourse. However, there is a lack of scholarship on how such technologies interact with African languages. Addressing this gap, the article presents data from experimentation with autocomplete predictions/suggestions for gendered or politicised keywords in Amharic, Kiswahili and Somali. It demonstrates that autocomplete functions for these languages and how users may be exposed to harmful content due to an apparent lack of filtering of problematic ‘predictions’. Drawing on debates on algorithmic power and digital colonialism, the article demonstrates that global power imbalances manifest here not through a lack of online African indigenous language content, but rather in regard to the moderation of content across diverse cultural and linguistic contexts. This raises dilemmas for actors invested in the multilingual Internet between risks of digital surveillance and effective platform oversight, which could prevent algorithmic harms to users engaging with platforms in a myriad of languages and diverse socio-cultural and political environments.
By Peter Chonka, Stephanie Diepeveen and Yidnekachew Haile for SAGE Journals on June 22, 2022
In his New York Times article, Mike Isaac describes how Meta is implementing a new system to automatically check whether the housing, employment and credit ads it hosts are shown to people equally. This move follows a 111,054 US dollar fine the US Justice Department issued Meta because its ad systems have been shown to discriminate against its users by, amongst other things, excluding black people from seeing certain housing ads in predominantly white neighbourhoods. This is the outcome of a long process, which we have written about previously. Continue reading “Meta forced to change its advertisement algorithm to address algorithmic discrimination”
An alarming report outlines an extensive pattern of racial discrimination within the city’s police department.
By Sam Richards and Tate Ryan-Mosley for MIT Technology Review on April 27, 2022
Should we keep ‘adjusting’ and ‘repairing’ the internet, or is it time to radically reimagine our society and thoroughly evaluate how the internet can serve us all?
By Nani Jansen Reventlow for De Groene Amsterdammer on April 9, 2022
The Racism and Technology Center co-signed an open letter asking the EU member states to make sure that the upcoming Digital Services Act will abolish so-called ‘dark patterns’ and advertising that is based on tracking and harvesting personal data. Continue reading “72 civil society organisations to the EU: “Abolish tracking-based online advertising””
A conversation about the unholy trinity of whiteness, modernity, and capitalism.
By André Brock for Logic on December 25, 2021
Around 2016, Facebook was still proud of its ability to target “Black affinity” and “White affinity” audiences for its customers’ ads. I then wrote an op-ed decrying this form of racial profiling that was enabled by Facebook’s data lust. Continue reading “Facebook has finally stopped enabling racial profiling for targeted advertising”
Over the past months, a slew of leaks from the Facebook whistleblower, Frances Haugen, has exposed how the company was aware of the disparate and harmful impact of its content moderation practices. Most damning is that in the majority of instances, Facebook failed to address these harms. This Washington Post piece discusses one of the latest of these revelations in detail: even though Facebook knew it would come at the expense of Black users, its algorithm to detect and remove hate speech was programmed to be ‘race-blind’. Continue reading “‘Race-blind’ content moderation disadvantages Black users”
Voyager, which pitches its tech to police, has suggested indicators such as Instagram usernames that show Arab pride can signal inclination towards extremism.
By Johana Bhuiyan and Sam Levin for The Guardian on November 17, 2021
Researchers proposed a fix to the biased algorithm, but one internal document predicted pushback from ‘conservative partners’.
By Craig Timberg, Elizabeth Dwoskin and Nitasha Tiku for Washington Post on November 21, 2021
Facebook Inc said on Tuesday it plans to remove detailed ad-targeting options that refer to “sensitive” topics, such as ads based on interactions with content around race, health, religious practices, political beliefs or sexual orientation.
By Elizabeth Culliford for Reuters on November 9, 2021
Our very own Naomi Appelman was interviewed for Atlas, a Dutch television show about science and current affairs. She talked about her research into what laws and regulations democracies should develop to ensure that large technology companies don’t unnecessarily exclude people. Continue reading “Regulating big tech to make sure nobody is excluded”
During the reckoning of the Black Lives Matter movement in summer 2020, the Daily Mail, a British tabloid, posted a video that featured black men in altercations with the police and white civilians. In the New York Times, Ryan Mac reports how Facebook users who watched that video saw an automated prompt asking if they would like to “keep seeing videos about Primates,” despite the video having no connection to primates or monkeys. Continue reading “Racist Technology in Action: Facebook labels black men as ‘primates’”
For as long as I can remember, I’ve felt the duty of being that woman who sits in a meeting room in London, Geneva, New York, Berlin and Paris and talks about what digital rights mean for not just people of colour in Europe and North America, but across the rest of the world. Approximately 84% of the world’s poor live in South Asia and sub-Saharan Africa, and the digital divide remains steep, but that’s only part of the story. These aren’t passive consumers of the web. They’re active prosumers. TikTok has been downloaded over 360 million times in South East Asia, a region of 658 million people. With social platforms, anyone with a phone can become a star, make money, connect with others, build a family of choice and acceptance, fall in love, and live a life they may not be allowed otherwise.
By Hera Hussain for Who Writes The Rules on August 23, 2021
Since 2017, the issue of online violence against women and girls has increasingly crept up the EU political agenda. Thanks to the collective work of the inspirational activists I have the honour of working side-by-side with, the reality of the persistent harms that racialised and marginalised women face is now being recognised, a marked win. This has not been without its challenges, particularly speaking as a young Black woman advocate in the Brussels political bubble.
By Asha Allen for Who Writes The Rules on August 23, 2021
I was once such a passionate advocate of the web that I made it my business to preach the gospel to my then skeptical friends, that technology would deliver a democratized, equitable and creatively limitless future to us all. After all, it was for everyone. And it was free. And there were no rules.
By Aina Abiodun for Who Writes The Rules on August 23, 2021
6 campaigners highlight marginalised people’s exclusion from the process of writing the rules that govern the online experience.
From People vs. Big Tech on October 4, 2021
Behind the promise of automation, advances of machine learning and AI, often paraded by tech companies like Amazon, Google, Facebook and Tesla, lies a deeply exploitative industry of cheap, human labour. In an excerpt published on Rest of World from his forthcoming book, “Work Without the Worker: Labour in the Age of Platform Capitalism,” Phil Jones illustrates how the hidden labour of automation is outsourced to marginalised, racialised and disenfranchised populations within the Global North, as well as in the Global South. Continue reading “Big Tech is propped up by a globally exploited workforce”
Many people use filters on social media to ‘beautify’ their pictures. In this article, Tate Ryan-Mosley discusses how these beauty filters can perpetuate colorism. Colorism has a long and complicated history, but can be summarised as a preference for whiter skin as opposed to darker skin. Ryan-Mosley explains that “though related to racism, it’s distinct in that it can affect people regardless of their race, and can have different effects on people of the same background.” The harmful effects of colorism, ranging from discrimination to mental health issues or the use of toxic skin-lightening products, are found across races and cultures. Continue reading “Photo filters are keeping colorism alive”
Big tech relies on the victims of economic collapse.
By Phil Jones for Rest of World on September 22, 2021
Twitter outrage over image search results of black and white teens is misdirected. We must address the prejudice that feeds such negative portrayals.
By Antoine Allen for The Guardian on June 10, 2016
Facebook called it “an unacceptable error.” The company has struggled with other issues related to race.
By Ryan Mac for The New York Times on September 3, 2021
Low uptake of ‘Smart Pricing’ feature among black hosts increased earnings gap.
By Dave Lee and Madhumita Murgia for Financial Times on May 13, 2021
We have written about the racist cropping algorithm that Twitter uses, and have shared how Twitter tried to fix the symptoms. Twitter also instituted an ‘algorithmic bug bounty’, asking researchers to prove bias in their algorithms. Continue reading “Proof for Twitter’s bias toward lighter faces”
Racial discrimination in dynamic pricing algorithms is neither surprising nor new. VentureBeat writes about another recent study that supports these findings, in the context of dynamic pricing algorithms used by ride-hailing companies such as Uber, Lyft and other apps. Neighbourhoods that were poorer and had larger non-white populations were significantly associated with higher fare prices. A similar issue was discovered in Airbnb’s ‘Smart Pricing’ feature, which aims to help hosts secure more bookings. It turned out to be detrimental to black hosts, leading to greater social inequality (even if unintentionally). Continue reading “Uber-racist: Racial discrimination in dynamic pricing algorithms”
Digital photo editing tools on apps like TikTok, Snapchat and Instagram are upholding warped beauty standards—and hurting people of color.
By Tate Ryan-Mosley for MIT Technology Review on August 15, 2021
Time and time again, big tech companies have shown their ability and power to (mis)represent and (re)shape our digital world. From speech, to images, and most recently, to the emojis that we regularly use. Continue reading “Racist Technology in Action: Apple’s emoji keyboard reinforces Western stereotypes”
Platform rules often subject marginalized communities to heightened scrutiny while providing them with too little protection from harm.
By Laura Hecht-Felella and Ángel Díaz for Brennan Center for Justice on April 8, 2021
Facebook, Twitter, Instagram, YouTube and TikTok failing to act on most reported anti-Jewish posts, says study.
By Maya Wolfe-Robinson for The Guardian on August 1, 2021
The Plug and Fast Company looked at what happened to the 3.8 billion dollars that US-based tech companies committed to diversity, equity, and inclusion as their response to the Black Lives Matter protests. Continue reading “Tech companies poured 3.8 billion USD into racial justice, but to what avail?”
Some refuse to choreograph Megan Thee Stallion song, highlighting how white users get credit for Black creativity.
By Kari Paul for The Guardian on June 24, 2021
A year ago, as our lives were being upended by the pandemic, Black Americans were simultaneously processing the emotional weight and tragedy of the murders of George Floyd, Breonna Taylor, Ahmaud Arbery, and others whose lives were cut short due to police brutality. The world watched as protest after protest erupted across the country over the summer of 2020. But, unlike previous collective actions, this moment felt different. Big Tech and corporate America—predominantly white environments—broke their silence. Companies started pledging to do things differently, claiming they would doggedly support Black workers, Black organizations, and Black companies via investments, donations, and hiring pledges. At The Plug, a subscription news and insights platform covering the Black innovation economy, we quickly began documenting the commitments made by tech CEOs, cross-referencing them with data points of what Black representation looked like across their workforces and boards. (You can view the original spreadsheet here.) A year later, we’re proud to continue that work, in partnership with Fast Company. Together we set out to try to understand—through data and first-person accounts—if anything really changed. How have the lives of Black tech workers, users, and citizens been altered by the bold commitments these companies made?
From Fast Company on June 16, 2021
Surveillance expert Chris Gilliard reflects on 2020’s racial justice protests, the hypocrisy of tech companies’ commitments, and where we are one year later.
By Chris Gilliard and Katharine Schwab for Fast Company on June 16, 2021
The feature associates “Africa” with the hut emoji and “China” with the dog emoji.
By Andrew Deck for Rest of World on June 15, 2021
Online dating platforms often provide a safe space for racist attitudes.
By Brady Robards, Bronwyn Carlson and Gene Lim for The Conversation on June 7, 2020