
Study: Facebook is skimping on moderation, and it’s harming the public


A new report from the New York University Stern Center for Business and Human Rights alleges that Facebook and other social media companies (Twitter and YouTube are also mentioned specifically) are outsourcing too much of their moderation to third-party companies, resulting in a workforce of moderators who are treated as “second-class citizens,” doing psychologically damaging work without adequate counseling or care.

Most disturbingly, the report points out how a lax attitude toward moderation has led to “Other harms — in some cases, lethal in nature … as a result of Facebook’s failure to ensure adequate moderation for non-Western countries that are in varying degrees of turmoil. In these countries, the platform, and/or its affiliated messaging service WhatsApp, have become important means of communication and advocacy but also vehicles to incite hatred and in some instances, violence.”

The report makes a number of suggestions for how social media platforms can improve moderation, the most dramatic of which is bringing moderators on as full-time employees with salaries and appropriate benefits (including proper medical care).

In response to the study, a Facebook spokesperson said content moderators “make our platforms safer and we’re grateful for the important work that they do.”

“Our safety and security efforts are never finished, so we’re always working to do better and to provide more support for content reviewers around the world,” the spokesperson said. Facebook did not address the specific recommendations of the study.

YouTube said it has hired 10,000 people across the globe to moderate content, describing that work as a critical part of its enforcement system. Moderators there are not allowed to review content for more than five hours per day and are offered training and wellness events such as yoga classes and mindfulness sessions, according to YouTube.

A Twitter spokesperson told Digital Trends: “Twitter has made great strides to support teams engaged in content moderation, which is a pivotal part of our service. We continue to invest in a combination of our global teams and use of machine learning and automation so we can appropriately scale our work to support the public conversation.”

Digital Trends spoke to the author of the report, Paul M. Barrett, the deputy director at the NYU Center for Business and Human Rights, to discuss his findings and their implications for the future of social media. (This interview has been edited for clarity.)

Digital Trends: To start, how did you get involved in this project?

Paul Barrett: I decided to take on this project because we’ve looked at the issue of outsourcing in a number of industries, chiefly in the apparel industry, and what the consequences of that use of outsourcing are. And I thought it would be interesting to assess that in connection with the social media industry, where I think the use of outsourcing is less well understood. And I’m interested in content moderation, because to a greater degree than I think most people really imagine, content moderation is really one of the central corporate functions of a business like Facebook. And therefore, it makes it somewhat anomalous or curious that Facebook and its peers hold this activity at arm’s length and marginalize it.

Is it well known that these companies rely mostly on outsourced moderators? Are they open about that or do they try to avoid revealing it?

I think they are somewhere between open and secretive about it. When Mark Zuckerberg has talked over the last couple of years about the vast expansion of human resources devoted to content moderation, he tends not to mention the fact that the overwhelming majority of these people are working for other companies, third-party vendors that Facebook contracts with.

So they don’t go out of their way to emphasize it. If you can get them to sit down and talk about it on the record, they will, of course, concede that, yes, it is outsourced, but they really don’t want to get into the details. They don’t want to give specific numbers. And, generally speaking, I think it’s fair to say there’s a great deal of reluctance to talk about this.

And I think all of that is indicative of their discomfort with the fact that they’ve made this into a peripheral activity when they know that it’s actually central to keeping their business going.


How much oversight does Facebook exercise over these moderation facilities? Do they stay mainly hands-off?

Well, when I asked that question, which is a good and natural question, I got two answers. One, the third-party vendors — or, as they call them, the partners — direct the activity on the production floor, as it’s called. So you need to go to them if you’re going to seek out details of exactly how things are run. And two: “But we hold them to the highest standards and we have detailed contracts that have all kinds of requirements in them!”

In Facebook’s case, as of 2019, they are supposedly doing independent audits of this activity. When I asked for the results of the audits, they said they weren’t prepared to share them. So they play it both ways. It’s primarily the responsibility of the third-party contractors, but they hold them to the highest standards. I don’t know what to make of that dichotomy, beyond the fact that if they wanted to supervise this activity in a direct and simplified way, it would be more straightforward to bring more of it, or all of it, in-house.

Did you get the sense that their decisions about moderation were more motivated by making money or just not understanding the importance of moderation until it became a big public issue for them?

I think cost savings has been a major driving force behind the move to outsource this activity in the first place. In Facebook’s case, that was back circa 2009 to 2010, as the company’s growth was really taking off and the amount of moderation they had to do was just completely overwhelming. They had small in-house teams working on it, and rather than making the bold decision of “We’ve got to keep control over this, we’ve got to make sure quality is maintained, so we’re going to make this a function that we really deal with in-house, the same way we do with our engineering and product teams and our marketing teams and so forth” … they didn’t.

But I think there’s another factor sitting alongside cost, which makes it a somewhat more complicated proposition. And that is a psychological factor, that content moderation is just not seen as being one of the sort of elite aspects of Silicon Valley business culture. It’s not engineering. It’s not marketing. It’s not the devising of popular, new products. It’s this very nitty-gritty, at times debilitating activity that’s not, by the way, any kind of direct profit center … it’s a cost, not a revenue generator.

And for all those reasons, I think the people who run these companies can’t really see themselves anywhere close to this activity and are more comfortable holding it at a distance. And that’s very hard to pin down, and if you lay it out like that, people will say, “Oh, no, we understand how important it is.”

But I stand by that assessment that there’s just a difference, a qualitative difference in the kind of activity that content moderation requires as opposed to what these companies are generally eager to be involved in.

The problem is that content moderation is different from cooking up the nice lunches that are on offer at Facebook, or from providing security or janitorial services, which are activities that you really can say are not part of Facebook’s core competency. And it’s understandable that they bring in companies that specialize in those services. Doing content moderation the right way is part of the core business of Facebook.

I’m curious if you have any thoughts about whether the government should be involved in forcing changes on Facebook. Both Trump and Joe Biden have harped on Section 230 lately and want to strip Facebook of the protections the law provides.

Personally, I think that that approach is … in Biden’s case I would describe it as kind of a gimmicky response to understandable unease with the size and influence of these several social media giants. Now in Trump’s case, I think it’s something else. I think it’s very direct retaliation and an effort to sort of shut the companies down or harm them in a much more retaliatory sense. But I don’t think that getting rid of Section 230 and having social media companies be liable for everything that users put on the sites makes a lot of sense. And I think it would quickly snuff out some of the good aspects of Facebook, the way people can use it to communicate, to express themselves in a very ready fashion. I mean, if you got rid of Section 230, you’d end up with a much, much smaller site where communication moves much more slowly because the site would have to be checking in a preemptive way almost everything that went up on the site.


You recommend that Facebook bring moderators in as full employees and double the number of moderators. Beyond changing how the work is organized, is there anything philosophical Facebook should reconsider about moderation itself, such as its views on what constitutes violent content?

You’re putting your finger on an important distinction. On one side are the high-level policy decisions: How do we define hate speech? Do we flag the president’s latest post when he’s talking about voting practices or making seemingly incendiary remarks about shooting protesters? Do we ever slow down, comment on, or, in extreme cases, maybe remove comments by the president of the country? Those decisions are and will be made by the senior-most people at the companies. Meanwhile, on a parallel track, the routine day-in, day-out activity of content moderation continues. So I think it’s important to draw that distinction and keep debating those big questions of the day, while on the other side dealing with a set of issues that are less philosophical and more operational: How do we treat the people who are doing this work? Are they employees, or do we treat them as outsiders who we deal with only through an intermediary, and so forth?

Which platforms do you think have taken the best approach to moderation so far?

It’s hard to say. Historically — and I think still today, very recent events notwithstanding — Twitter has taken the most laissez-faire approach to moderation and the subset of moderation that is fact-checking. YouTube historically has had big problems with conspiracy theories and activity around conspiracy theories like Pizzagate and QAnon and so forth. They seem to have gotten a bit of religion on those subjects and are being more aggressive about trying to take down some of those types of things. Facebook has more formalized procedures for these things and a much more systematic fact-checking operation, even though it’s still very much inadequate to the task. So Facebook has done the most, but it’s not as if they’ve solved the problem.

Is there anything you’d like to close on?

I think an important aspect of all this is how the marginalization of content moderation has had a particular effect in the developing world and has contributed to the difficulties that Facebook, in particular, has had with its platform being misused and that misuse leading to real-world violence in countries like Myanmar, Sri Lanka, India, Indonesia and so forth.

I think it’s important to connect those things to the inadequate attention that the company has paid to those countries historically. It’s linked to the inadequate attention that they’ve paid to content moderation. And I think if it were a function that was part of Facebook proper and seen that way — and the stature and status of the activity and the individuals were raised — that those kinds of problems would be much less common. Now, they have made some progress in that regard. They have added content moderators in some of those countries, but I think they still have a long way to go.

Will Nicol
Former Digital Trends Contributor