Social media content moderation has double standards

The policies social media platforms use to decide what content they allow are rife with double standards, further marginalising vulnerable groups. Ángel Díaz and Laura Hecht-Felella demonstrate this in their report for the Brennan Center for Justice.

They focus on the content moderation policies and their enforcement on Facebook, YouTube and Twitter. Taking a close look at how these platforms operate, the report concludes that:

"Platform rules often subject marginalized communities to heightened scrutiny while providing them with too little protection from harm."

Looking at these platforms’ rules on hate speech, terrorist content and harassment, the researchers show how speech from communities of color, women, LGBTQ+ communities, and religious minorities is at heightened risk of over-enforcement, while harms targeting those same communities often go unaddressed.

For example, vague rules targeting terrorist content lead to disproportionate over-removal of content from Muslims and Arabic speakers, while the much narrower rules on white supremacist groups result in far fewer removals. The report offers several recommendations, including increased transparency, appeals procedures and independent oversight, directed at both the platforms themselves and legislators.

See: Double Standards in Social Media Content Moderation at the Brennan Center for Justice.
