Facebook apologizes after report shows inconsistencies in removing hate speech

Facebook has taken plenty of criticism over the algorithms designed to keep content within its community guidelines, but a new round of investigative reporting suggests the company's team of human reviewers could use some improvement too. In a study of 900 posts, ProPublica reports that Facebook's review staff was inconsistent in handling posts containing hate speech, removing some but leaving up others with similar content.

Facebook apologized for some of those decisions, saying that of the 49 posts highlighted by the non-profit investigative organization, reviewers made the wrong call on 22. The social media platform defended 19 other decisions, while eight posts were excluded because of incorrect flags, user deletions, or a lack of information. The study was crowdsourced, with Facebook users sharing the posts with the organization.

Justin Osofsky, Facebook’s vice president of Global Operations and Media Partnerships, said that the social media platform will be expanding review staff to 20,000 people next year. “We’re sorry for the mistakes we have made — they do not reflect the community we want to help build,” he said in response to the ProPublica investigation. “We must do better.”

ProPublica said Facebook is inconsistent in its treatment of hate speech, citing two different statements that both essentially wished death on an entire group of people; only one of them was removed after being flagged. The second post was taken down only after the ProPublica investigation.

“Based on this small fraction of Facebook posts, its content reviewers often make different calls on items with similar content, and don’t always abide by the company’s complex guidelines,” ProPublica said. “Even when they do follow the rules, racist or sexist language may survive scrutiny because it is not sufficiently derogatory or violent to meet Facebook’s definition of hate speech.”

On the flip side, the report also found posts that were removed but shouldn't have been. In one example, the image contained a swastika, but the caption asked viewers to stand up against a hate group.

The study is far from the first time ProPublica has called out Facebook's practices this year. This fall, Facebook changed its ad targeting after a study showed that when enough users typed their own answers into the bio fields, racial slurs could become a category for a targeted ad. Just a week ago, ProPublica demonstrated that employers could use those same ad tools to discriminate by age. In the first case, Facebook apologized and immediately paused the ad tool until the slip-up could be fully corrected; in the second, the company defended its practices.

Monitoring content on the largest social media network, with more than 2 billion monthly active users, isn't an easy task, and it's one that Facebook approaches with both artificial intelligence algorithms and human reviewers. Social media networks generally attempt to strike a balance between removing hateful content and protecting free speech. Osofsky says the platform deletes 66,000 instances of hate speech every week.

The move to a review staff of 20,000 is fairly significant: when Facebook announced in May that it would add 3,000 more reviewers, that expansion brought the team to 7,500 people.

ProPublica says the investigation is important “because hate groups use the world’s largest social network to attract followers and organize demonstrations.”

Hillary K. Grigonis