Facebook removed 583 million fake accounts in the first quarter of 2018

It’s not just third-party apps getting the ax from Facebook — it’s fake accounts, too. On Tuesday, May 15, Facebook published its first-ever Community Standards Enforcement Report, part of an ongoing effort to restore public faith in the social network as it combats fake news and privacy scandals.

And as it turns out, when it comes to fighting the fake, there’s a lot to contend with. In fact, the company’s vice president of product management, Guy Rosen, revealed that Facebook disabled around 583 million fake accounts in the first three months of 2018 alone. For context, that’s about a quarter of the social network’s entire user base.

On average, around 6.5 million fake accounts were created every day between the beginning of 2018 and March 31. Luckily, Rosen notes that the majority of these spam accounts were disabled within just minutes of registration. This is largely thanks to Facebook’s artificial intelligence tools, which relieve humans of the burden of combing through the site to find the bots. That said, while A.I. is obviously useful, it’s not entirely foolproof. Facebook still estimates that between 3 and 4 percent of its accounts are not real. With around 2.2 billion users, that works out to somewhere between 66 million and 88 million fake accounts.

Moreover, Facebook managed to find and delete 837 million spam posts in the first quarter of 2018, the vast majority of which were deleted before users got the chance to report them. “The key to fighting spam is taking down the fake accounts that spread it,” Rosen noted. And this, of course, is an ongoing effort at Facebook, which also recently revealed that it is blocking access for more than 200 third-party apps found to be in violation of its data policies.

While Facebook has been quite effective at taking down instances of adult nudity and sexual activity, as well as graphic violence, the team admits that its technology “still doesn’t work that well” when it comes to hate speech. While Facebook ultimately removed 2.5 million pieces of hate speech in the first three months of the year, only 38 percent was flagged by A.I. tools.

“As Mark Zuckerberg said at F8, we have a lot of work still to do to prevent abuse,” Rosen noted. “It’s partly that technology like artificial intelligence, while promising, is still years away from being effective for most bad content because context is so important.”

That said, Facebook says that it is “investing heavily in more people and better technology to make Facebook safer for everyone,” and is also dedicated to transparency. “We believe that increased transparency tends to lead to increased accountability and responsibility over time, and publishing this information will push us to improve more quickly, too,” Rosen concluded. “This is the same data we use to measure our progress internally — and you can now see it to judge our progress for yourselves. We look forward to your feedback.”
