Platform rules often subject marginalized communities to heightened scrutiny while providing them with too little protection from harm.
Examining these platforms’ rules on hate speech, terrorist content, and harassment, the researchers show how the viewpoints of communities of color, women, LGBTQ+ communities, and religious minorities are at risk of over-enforcement, while harms targeting these same groups often go unaddressed.
For example, vague rules targeting terrorist content lead to the disproportionate over-removal of content from Muslims and Arabic speakers, while rules on white supremacist groups are much narrower and result in far fewer removals. The report proposes several recommendations, such as increased transparency, appeals procedures, and oversight, aimed at both the platforms themselves and legislators.
See: Double Standards in Social Media Content Moderation at the Brennan Center for Justice.