
Influential or insignificant? A primer on Facebook’s fake news dilemma

Facebook patent reveals automated removal process for objectionable content

Facebook is on a precipice. Its critics claim fake news on its site swayed the presidential election, but any changes it has attempted to make to its News Feed have seen it branded as a “media company” — a label it has repeatedly rejected.

So where does Facebook go from here? In the past week alone, it has made strategic moves to track fabricated content on its site and to stop third parties from profiting from it. Meanwhile, its management has played down reports of the extent of its fake news crisis, while also committing to make content on its platform more diverse. These are positive signs, but its detractors claim it still isn’t enough.

Read on to find out how Facebook’s fake news problem came to be and how it has called into question the inner workings of the world’s biggest social network.

Facebook patenting automated content removal tool

Facebook CEO Mark Zuckerberg’s most recent statement on his platform’s fake news problem made it clear that new tools were being developed to tackle the issue. Now, we may have a better idea of what those updates could look like.

A patent application filed by Facebook in June 2015 just came to light. The application details an AI-assisted tool to improve the detection of “objectionable content,” such as pornography, hate speech, and harassment, notes The Verge. Seeing as Zuckerberg mentioned last month that the best way to tackle misleading content was to develop “better technical systems to detect what people will flag as false before they do it themselves,” the automated system detailed in the application could be the solution he was referring to.

Currently, Facebook relies on its users to report content that violates its community standards. The flagged items are then reviewed by its moderators, who decide what action to take (for example, removing the material, or suspending the account responsible for posting it). The new system essentially takes the existing method and aims to streamline it with machine learning.

The method assigns the reported item a score based on a set of criteria, including the reliability of the person reporting it, whether it came from a verified profile, the number of users who reported it as objectionable, and the age of the account that posted it. Content that receives a higher score is dealt with more quickly. Over time, the machine learning tool would learn from these signals until it could make the decisions itself, making the review process even faster.
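The scoring logic described in the patent application can be sketched roughly as follows. This is a minimal illustration of the idea only: the field names, weights, and thresholds here are assumptions for the sake of the example, not details from Facebook's actual system.

```python
# Hypothetical sketch of the report-scoring idea described in the patent
# application. All weights and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Report:
    reporter_reliability: float  # 0.0-1.0, based on past report accuracy
    from_verified_profile: bool  # whether the posting profile is verified
    report_count: int            # how many users flagged the item
    account_age_days: int        # age of the account that posted it

def priority_score(r: Report) -> float:
    """Higher score means the item is reviewed (or actioned) sooner."""
    score = 0.0
    score += 2.0 * r.reporter_reliability        # trusted reporters count more
    score += min(r.report_count, 10) / 10        # many reports raise priority
    if not r.from_verified_profile:
        score += 0.5                             # unverified posters are riskier
    if r.account_age_days < 30:
        score += 0.5                             # very new accounts are riskier
    return score

# Items would then be handled in descending score order.
report = Report(reporter_reliability=0.9, from_verified_profile=False,
                report_count=12, account_age_days=7)
print(round(priority_score(report), 2))
```

In a machine learning version of this, the hand-picked weights above would be replaced by parameters learned from moderators' past decisions, which is the streamlining the application describes.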

In an emailed statement, a Facebook spokesperson refused to comment on whether the technology will ever be implemented on the social network.

The main question is whether an automated content removal system can be applied to the kind of hyper-partisan articles currently drawing criticism. Simply removing content (as this new system does) could, once again, raise questions about Facebook’s objectivity. And that’s the last thing the company will want.

Is this Facebook survey part of the platform’s move to stamp out fake news?

This week, a number of Facebook users began sharing screenshots of a survey appearing beneath select news articles on the site, asking them to rate the language used in the link. Could this be part of Facebook’s grand strategy (see “Zuckerberg on the defensive” below) to fix fake news on its platform?

The survey asks: “To what extent do you think that this link’s title uses misleading language?” It then gives users several options, including “not at all,” “slightly,” “somewhat,” “very much,” and “completely.”

Facebook CEO Mark Zuckerberg stated last month that his company would reach out to its platform’s users for their feedback on the fake news issue. Therefore, the survey could be part of a data collection effort to help Facebook build future News Feed algorithm updates that could spot untrustworthy links automatically.

Some users pointed out that the term “misleading” could apply just as broadly to click-bait. After all, Facebook did promise to clamp down on so-called deceptive or exaggerated articles that exist purely to attract website hits. However, when it announced its war on click-bait, the platform said it was already implementing an algorithm to target those articles. Thus, the new survey cannot simply be meant to identify click-bait, seeing as the site already knows how to do that. Chances are Facebook is lumping several article types together: spam, click-bait, bogus stories, and perhaps even hyper-partisan pieces, all of which could fall under the banner of unreliable news.

One thing’s for sure: Facebook knows its News Feed needs cleaning, and it is recruiting its users to help it do the job.
