A new report purports to reveal for the first time Facebook’s secret content removal policies.
Excerpts of internal documents that the company allegedly hands out to both its own staff and third-party content moderators were provided to German newspaper Süddeutsche Zeitung by unidentified sources. Although Facebook’s website touches upon its guidelines, the information in the documents offers much more detail. The chapter that stands out covers Facebook’s stance on hate speech, an issue of contention in Germany, where the social network is currently facing a lawsuit over its alleged inaction on the matter.
The documents reveal a convoluted hate-speech policy that contains a number of loopholes resulting from the criteria Facebook uses to determine what constitutes hateful rhetoric.
Facebook apparently does not permit “verbal attacks” on a “protected category,” for example. These self-determined categories are currently based on a number of factors, including sex, religious affiliation, gender, race, ethnicity, sexual orientation, national origin, disability, and serious illness. Some of these groups contain subcategories that receive extra protection (for example, under “age,” criteria such as “youth” and “senior citizen” receive priority).
An overview at the end of the hate speech chapter is where things start to get a bit muddled. A sentence reportedly containing an expletive directly followed by a reference to a religious affiliation (for example: “f*cking Muslims”) is not allowed. However, the same does not go for the term “migrants,” as migrants are allegedly only a “quasi-protected category.” Additionally, Facebook reportedly allows for posts that could be deemed hateful against migrants under certain circumstances. For example, a statement such as “migrants are dirty” is allowed, whereas “migrants are dirt” isn’t.
We reached out to Facebook to verify the accuracy of the documents but did not immediately receive a response. If the documents do turn out to be official, the examples above could raise alarm bells for German authorities; the term “migrants” was added to the list of criteria only after public pressure in the country. Earlier this week, German Justice Minister Heiko Maas also urged an immediate crackdown on hate speech disseminated through social media sites such as Facebook.
A related report in the same German daily provides an in-depth look at the inner workings of Facebook’s Berlin-based content moderation team. In it, several members of the company’s 600-strong staff (which also includes employees outsourced from a Bertelsmann business services unit) claim to have suffered psychological issues as a result of the material they were exposed to. “I’ve seen things that made me seriously question my faith in humanity,” said one anonymous worker. The report claims that the troubled workers were not provided access to professional help.
Another employee describes the tortuous guidelines Facebook allegedly has in place: “The rules are almost impossible to understand. I’ve said to my team leader: this is crazy! The picture is full of blood and brutality, no one should have to see that. But he said: that’s just your opinion. You have to try and think about what Facebook wants. We’re expected to think like machines.”