Influential or insignificant? A primer on Facebook’s fake news dilemma

Facebook patent reveals automated removal process for objectionable content

Facebook is on a precipice. Its critics claim fake news on its site swayed the presidential election, but any changes it has attempted to make to its News Feed have seen it branded as a “media company” — a label it has repeatedly rejected.

So where does Facebook go from here? In the past week alone, it has made strategic moves to track fabricated content on its site and to stop third parties from profiting from it. Meanwhile, its management has played down reports of the extent of its fake news crisis, while also committing to make content on its platform more diverse. These are positive signs, but its detractors claim it still isn’t enough.

Read on to find out how Facebook’s fake news problem came to be and how it has called into question the inner workings of the world’s biggest social network.

Facebook patenting automated content removal tool

Facebook CEO Mark Zuckerberg’s most recent statement on his platform’s fake news problem made it clear that new tools were being developed to tackle the issue. Now, we may have a better idea of what those updates could look like.

A patent application filed by Facebook in June 2015 just came to light. The application details an AI-assisted tool to improve the detection of “objectionable content,” such as pornography, hate speech, and harassment, notes The Verge. Seeing as Zuckerberg mentioned last month that the best way to tackle misleading content was to develop “better technical systems to detect what people will flag as false before they do it themselves,” the automated system detailed in the application could be the solution he was referring to.

Currently, Facebook relies on its users to report content that violates its community standards. Flagged items are then reviewed by its moderators, who decide what action to take (for example, removing the material or suspending the account responsible for posting it). The new system essentially takes the existing method and aims to streamline it with machine learning.

The method assigns the reported item a score based on a set of criteria, including the reliability of the person reporting it, whether it came from a verified profile, the number of users who reported it as objectionable, and the age of the account that posted it. Content that receives a higher score is dealt with more quickly. Essentially, the machine learning tool would learn from these signals so that it could one day make the decisions itself, making the review process even faster.
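The scoring idea described above can be sketched in a few lines of code. This is a minimal illustration only: the signal names and weights below are assumptions for the sake of the example, not details taken from Facebook’s actual filing.

```python
def report_score(report: dict) -> float:
    """Combine the signals described in the patent application into a
    single priority score; higher-scoring reports are reviewed first."""
    score = report["reporter_reliability"]          # 0.0 .. 1.0
    if report["reporter_verified"]:
        score += 0.5                                # verified reporters carry more weight
    score += min(report["num_reports"], 100) / 100  # cap the report-volume signal
    if report["poster_account_age_days"] < 30:
        score += 0.5                                # brand-new accounts are riskier
    return score

reports = [
    {"reporter_reliability": 0.9, "reporter_verified": True,
     "num_reports": 40, "poster_account_age_days": 5},
    {"reporter_reliability": 0.2, "reporter_verified": False,
     "num_reports": 3, "poster_account_age_days": 900},
]
# Build the review queue, highest-priority report first.
queue = sorted(reports, key=report_score, reverse=True)
```

A system like this would hand the top of the queue to human moderators first, and over time a machine learning model could be trained on their decisions to act on the clearest cases automatically.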

In an emailed statement, a Facebook spokesperson refused to comment on whether the technology will ever be implemented on the social network.

The main question is whether an automated content removal system can be applied to the type of hyper-partisan articles that are currently drawing criticism. Simply removing content (as this new system does) could, once again, raise questions about Facebook’s objectivity. And that’s the last thing the company will want.

Is this Facebook survey part of the platform’s move to stamp out fake news?

This week, a number of Facebook users began sharing screenshots of a survey appearing beneath select news articles on the site, asking them to rate the language used in the link. Could this be part of Facebook’s grand strategy (see “Zuckerberg on the defensive” below) to fix fake news on its platform?

The survey asks: “To what extent do you think that this link’s title uses misleading language?” It then gives users several options, including “not at all,” “slightly,” “somewhat,” “very much,” and “completely.”
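If Facebook were to feed these responses into an algorithm, one plausible approach would be to map each answer onto a numeric scale and average the ratings per headline. The answer labels come from the survey itself; the numeric values and the averaging scheme are purely illustrative assumptions.

```python
# Map the survey's answer options onto a 0-1 scale.
# The numeric values here are an illustrative assumption.
OPTION_SCORES = {
    "not at all": 0.0,
    "slightly": 0.25,
    "somewhat": 0.5,
    "very much": 0.75,
    "completely": 1.0,
}

def misleading_score(responses):
    """Average the respondents' ratings for one link's title."""
    return sum(OPTION_SCORES[r] for r in responses) / len(responses)

# Example: three users rate the same headline.
score = misleading_score(["slightly", "somewhat", "completely"])
```

Aggregated across many users, scores like this could serve as labeled training data for a model that flags misleading headlines automatically.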

Facebook is asking whether this @PhillyInquirer headline is fake? pic.twitter.com/cCUpwtvQlS

— Chris Krewson (@ckrewson) December 5, 2016

Facebook CEO Mark Zuckerberg stated last month that his company would reach out to its platform’s users for feedback on the fake news issue. Therefore, the survey could be part of a data collection effort that helps Facebook build future News Feed algorithm updates capable of spotting untrustworthy links automatically.

Some other users pointed out that the term “misleading” could broadly be applied to click-bait as well. After all, Facebook did promise to clamp down on so-called deceptive or exaggerated articles that exist to attract website hits. However, upon announcing its war on click-bait, the platform claimed it was already implementing an algorithm to target the articles in question. Thus, the new survey cannot simply be meant to help identify click-bait, seeing as the site already knows how to do that. Chances are, Facebook is lumping a number of article types together (spam, click-bait, bogus, and maybe even hyper-partisan), all of which could fall under the banner of unreliable news.

One thing’s for sure: Facebook knows its News Feed needs cleaning, and it is recruiting its users to help it do the job.

Facebook’s AI chief: Fake news can be fixed using tech

Yann LeCun, Facebook’s head of AI research, claims the company can easily build the tech to target fake news, but its real issue is how to implement the software.

Speaking to reporters, LeCun said: “The technology either exists or can be developed. But then the question is how does it make sense to deploy it?”

Without commenting on whether AI would be part of the system, LeCun added that the product’s implementation was not his department, Recode reports. “They’re more like trade-offs that I’m not particularly well placed to determine,” said LeCun. “Like, what is the trade off between filtering and censorship, and free expression and decency, and all that stuff, right?”

His words echo the statements made by Mark Zuckerberg, in which the Facebook CEO repeatedly stressed that his company was erring on the side of caution in terms of attempting to define what is considered to be the “truth.”

Fictional stories and where to find them


To understand the company’s dilemma, it is best to start from its core social sharing tool: the Facebook News Feed.

Both internally at Facebook and externally among the media, there is much talk (and disagreement) over the so-called influence of the News Feed, now used by 1.75 billion people. One thing’s for sure: any real steps to stamp out fake news will start and end with the largely automated timeline.

At present, the News Feed prioritizes content it thinks you’ll like based on your activity – such as posts you’ve interacted with through likes, comments, and shares. The social network’s critics claim this system creates a filter bubble (or echo chamber) that only functions to surface content that corresponds to your views and opinions.

With that in mind, fake news can have a negative impact – especially in light of recent findings by the Pew Research Center that revealed almost half of American adults on Facebook get their news from the platform. But fabricated content comes in all shapes and sizes. There’s spam content posted on deceptive sites purely to generate ad revenue; erroneous articles rushed to print online that are later retracted or amended; and hyper-partisan (or bogus) news items that exist simply to put forth skewed political viewpoints. In our post-election landscape, it is the latter that is causing the most concern over its detrimental effects on social media users.

Even before the election outcome was decided, a number of reports had measured the extent of Facebook’s fake news problem. Chief among them was John Herrman’s piece in The New York Times detailing the rise of these types of outlets on the platform. “Facebook-native political pages have begun to create and refine a new approach to political news: cherrypicking and reconstituting the most effective tactics and tropes from activism, advocacy, and journalism into a potent new mixture,” Herrman wrote.

Two months later, BuzzFeed published its own report regarding fake political news on Facebook in which it claimed that a hub of pro-Trump sites were being operated far away from the U.S. in the former Yugoslav republic of Macedonia. These sites were spamming Facebook groups with their “baseless” content with no other agenda than to further their own financial gain by raking in ad revenue.

This Facebook trending story is 100% made up.
Nothing in it is true.
This post of it alone has 10k shares in the last six hours. pic.twitter.com/UpgNtMo3xZ

— Ben Collins (@oneunderscore__) November 14, 2016

Following election night, an influx of articles on Facebook’s fake news problem articulated the need to hold the company accountable for its so-called part in swaying the result. How did Facebook respond? Over to you, Zuck.

Zuckerberg on the defensive

It didn’t take long for Zuckerberg to leap to his company’s defense. Over the past few weeks, the Facebook founder has commented on the issue three times in total.

Earlier this month he made some valid points that hinted at the mainstream media’s disconnect from U.S. voters. Speaking at a tech conference, Zuckerberg made his first public statement on the issue, in which he dismissed the notion that fake news on Facebook swayed the election as “crazy.” He added: “There is a certain profound lack of empathy in asserting that someone voted the way they did … because they saw some fake news. If you believe that then I don’t think you have internalized the message Trump supporters in this election are trying to send.”

Moreover, in his second statement shared via his Facebook account, Zuckerberg insisted that “more than 99 percent of what people see” on the platform is authentic.

“Only a very small amount is fake news and hoaxes. The hoaxes that do exist are not limited to one partisan view, or even to politics,” Zuckerberg said. “Overall, this makes it extremely unlikely hoaxes changed the outcome of this election in one direction or the other.”

He concluded by stating that Facebook would continue its research into the issue, but warned the company had to remain cautious when implementing any changes. “I believe we must be extremely cautious about becoming arbiters of truth ourselves,” Zuckerberg said.

In the past, these types of changes have come in reaction to a backlash (as in October when Facebook promised to surface more “newsworthy” content, following an outcry over its removal of a prominent Vietnam War photograph). And that also seems to be the case now.

On November 18 came the revelation that Facebook is actually planning to implement practical changes. Again, it was Zuckerberg who made the announcement outlining the proposed updates.

Chief among the new features is what sounds like an algorithmic update to the News Feed. Using its machine learning tech, the system will, according to the CEO, be able to predict fake news content based on data trends and label it as bogus. Zuckerberg also mentioned a warning system that will alert users who read or attempt to share fake news as to its unreliability. He claimed that Facebook was working on these tools with third-party fact checking sites, journalists, and its community of users.

The fact that Zuckerberg had to address the company’s critics several times, and in quick succession, betrays how deeply concerned Facebook is about the matter, which continues to dominate headlines. And now, reports are beginning to surface that those concerns run deep. Indecision on the part of Facebook has reportedly led to disagreements between management and staff members.

Internal discord 


The big blue social network should have come out victorious after the most recent election season. Last month, it was touting its positive social impact on voter registration. In an otherwise fraught political environment, Facebook managed to brush off earlier accusations of liberal bias by assisting both campaigns with their respective Facebook Live video feeds. Then, just a week before the country went to the polls, it announced another strong quarter of earnings. On the surface, everything was going swimmingly for the social network.

Internally, however, Facebook is reportedly struggling to deal with the latest round of criticism. The company has allegedly created algorithmic updates to its News Feed to specifically target fake news but it fears implementing the changes could once again lead to accusations of bias.

The problem, according to anonymous sources close to the matter who spoke to Gizmodo, is that the sites in question are “disproportionately” right-wing in terms of their editorial content. Facebook fears that by targeting the content produced by these Pages (some of which have millions of likes, and boast high levels of engagement) it could upset conservatives. Its inability to address the issue could be the result of an earlier controversy regarding its Trending Topics feed.

In May, Facebook found itself on the receiving end of a backlash similar to the one it faces today. Following revelations by ex-employees that their colleagues had been suppressing conservative news on the trending topics section, Facebook was forced to issue statement after statement denying managerial involvement. It even launched an internal investigation into the matter, which found no evidence of systemic bias.

The trending topics incident “paralyzed” Facebook, rendering it unable to clamp down on fake news in fear of raising doubts over its impartiality, according to employees who spoke to The New York Times.

For its part, Facebook claimed it “did not build and withhold any News Feed changes based on their potential impact on any one political party.” It continued: “We always work to make News Feed more meaningful and informative … This includes continuously reviewing updates to make sure we are not exhibiting unconscious bias.”

It seems so-called “renegade” Facebook employees are again taking matters into their own hands. “Fake news ran wild on our platform during the entire campaign season,” a Facebook worker told BuzzFeed. Alongside a dozen or so other staff members, the unidentified individual is allegedly part of a task force that is taking it upon themselves to create measures to battle fake news. The group plans to eventually share its recommendations with management. Other employees are quoted as saying that “hundreds” of workers are dissatisfied with Facebook’s stance on fake news.

Yet, even in the midst of these reports of internal strife, those paying attention to Facebook will have noticed that short-term changes are already being implemented.

Adverts speak louder than words

Earlier in November, Facebook banned operators of fake news sites from utilizing its ad network to generate income. Despite being viewed as a small step, the move hits fake news vendors where it hurts: their wallets.

“We do not integrate or display ads in apps or sites containing content that is illegal, misleading or deceptive, which includes fake news,” Facebook said in a statement following its update to its ad policy.

Aside from trying to limit the amount of money fake news outlets can make from its site, Facebook also took another step that could be related to a newfound strategy aimed at flagging fake content. Shortly before its clampdown on ads attached to fake news sites, the company quietly purchased a popular social analytics tool used by media companies to track trending stories. CrowdTangle announced its takeover on its website – Facebook made no official announcement. Why is the company’s latest acquisition important? Well, it just so happens to be the same tool reporters have used to track the rise of fake news on Facebook.

I used Crowdtangle to report on spread of fake news on Facebook. Now that Facebook has bought Crowdtangle, I worry that use will be gone

— Alex Kantrowitz (@Kantrowitz) November 11, 2016

Obama cares


Just as Facebook seems to be getting its act together, the issue of fake news is being discussed in the global political arena. President Barack Obama made his strongest remarks on the subject, claiming it is damaging to the U.S. democratic process.

“If we are not serious about facts and what’s true and what’s not, if we can’t discriminate between serious arguments and propaganda, then we have problems,” he said during a press conference in Germany.

“In an age where there’s so much active misinformation and it’s packaged very well, and it looks the same when you see it on a Facebook page or you turn on your television,” Obama said. “If everything seems to be the same and no distinctions are made, then we won’t know what to protect.”

Obama – who, during his presidency, has participated in two Q&A sessions with Zuckerberg – previously referred to the conspiracy theories being floated around on Facebook as creating a “dust cloud of nonsense.”

Updated on 12-07-2016 by Saqib Shah: Added news of Facebook patent application detailing an automated content removal system.
