‘Race-blind’ content moderation disadvantages Black users

Over the past months, a slew of leaks from Facebook whistleblower Frances Haugen has exposed how the company was aware of the disparate and harmful impact of its content moderation practices. Most damning is that, in the majority of instances, Facebook failed to address these harms. This Washington Post piece discusses one of the latest of these revelations in detail: even though Facebook knew it would come at the expense of Black users, its algorithm to detect and remove hate speech was programmed to be ‘race-blind’.

In 2018, internal researchers at Facebook set up a project aimed at addressing the “worst of the worst” of the content on its platform, primarily hate speech. The project found that people considered hate speech directed at minorities to be the most harmful. Additionally, more than half of all hate speech flagged by users was directed at Black people, Muslims, members of the LGBTQ+ community and Jews.

However, the algorithm Facebook used primarily detected and removed hate speech aimed at White people and men, while leaving racist and derogatory hate speech directed at minorities on the platform. This internal research is consistent with an independent civil rights audit of Facebook, performed in the same year, which called Facebook’s content moderation policy in many instances a “tremendous setback” for civil rights.

The research team proposed to overhaul the hate speech algorithm to focus on removing content that targets the four most harassed groups on Facebook. However, Facebook decided against this, fearing a backlash from America’s far right and worrying that the measures would interfere with its perceived political neutrality. Instead, it preferred that the algorithm remain ‘race-blind’, even if this came at the expense of vulnerable groups. As one of the researchers who spoke to the Washington Post summarized:

“If you don’t do something to check structural racism in your society, you’re going to always end up amplifying it and that is exactly what Facebook’s algorithms did.”

The Facebook hate speech algorithm is a clear example of how “race-blindness” or “neutrality”, claimed in the face of systemic social injustice, often means maintaining and reproducing a racist status quo.

See: “Facebook’s race-blind practices around hate speech came at the expense of Black users, new documents show” at the Washington Post.
