
Social (Net)Work: Fake news spreads faster than truth, but bots aren’t to blame


Criticism over hate speech, extremism, fake news, and other content that violates community standards has the largest social media networks strengthening policies, adding staff, and reworking algorithms. In the Social (Net)Work series, we explore what social platforms are doing, what works, what doesn’t, and possibilities for improvement.

Sequestered in his dormitory as the manhunt for the second suspect in the Boston Marathon bombing locked down the entire city, MIT student Soroush Vosoughi turned to the fastest source of news he knew: social media. While social networks spread eyewitness accounts and real-time updates, the platforms also perpetuated rumors of a third bomb, and even a third suspect. What Vosoughi didn’t know at the time was that those rumors were 70 percent more likely to get a retweet than the actual truth.


Fast forward five years, and Vosoughi, now a postdoctoral associate, is the co-author of a study out of MIT’s Media Lab that found false news not only spreads faster, farther, and deeper than the real thing, but that bots aren’t the reason for the wider spread. So what’s the cause?

Probably human nature, the researchers suggest. Working with Deb Roy and Sinan Aral, Vosoughi says the team conducted what is the most comprehensive study of Twitter yet, in both the time frame and the number of Tweets included. The study, published in the March 9 issue of Science, covers a decade of Tweets, from Twitter’s launch in 2006 through 2017.

False news spreads farther and faster than the real thing

The study’s authors spent a year and a half using Twitter archives to look at around 126,000 stories, Tweeted 4.5 million times by more than three million users.

Pictured, left to right: Soroush Vosoughi (seated), a postdoc at the Media Lab’s Laboratory for Social Machines; Sinan Aral, the David Austin Professor of Management at MIT Sloan; and Deb Roy, an associate professor of media arts and sciences at the MIT Media Lab, who also served as Twitter’s chief media scientist from 2013 to 2017. MIT

While earlier studies examined how rumors diffuse, the MIT study compared the spread of stories verified as true with those verified as false. (The group chose to leave the term “fake news” out of the academic study because of the political connotations the term has picked up.)

The group conducted the study using 126,000 stories that had been checked by six independent fact-checking organizations, such as Snopes and FactCheck.org, eliminating from the data any story on which the fact-checkers’ verdicts disagreed; across the data set, the organizations’ assessments agreed between 95 and 98 percent of the time. With each story labeled as true, false, or mixed, the group then analyzed how each category spread.
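To make that filtering step concrete, here is a minimal sketch of a consensus filter. Everything in it (the verdicts data, the consensus_label helper, the 95 percent threshold) is hypothetical and only illustrates the idea of dropping stories the fact-checkers disagreed on; it is not the study’s actual pipeline.

```python
# A hypothetical consensus filter: keep a story only if enough
# fact-checkers assign it the same label. Data and names are invented.
from collections import Counter

verdicts = {
    "story-001": ["false", "false", "false"],   # unanimous
    "story-002": ["true", "mixed", "true"],     # checkers disagree
}

def consensus_label(labels, threshold=0.95):
    """Return the majority label if agreement meets the threshold, else None."""
    top_label, count = Counter(labels).most_common(1)[0]
    return top_label if count / len(labels) >= threshold else None

# Stories without a consensus label are dropped from the analysis.
labeled = {sid: consensus_label(v) for sid, v in verdicts.items()}
filtered = {sid: lab for sid, lab in labeled.items() if lab is not None}
print(filtered)  # {'story-001': 'false'}
```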


Out of the stories verified as true, few Tweets reached more than 1,000 people. Yet the most widespread false Tweets reached between 1,000 and 100,000 users. Those false Tweets also took on a viral form, branching out through new retweets rather than spreading from a single broadcast. True stories took six times longer than false ones to reach 1,500 people. The group also looked at the depth of that spread, measured as chains of unique user retweets, and found that true stories took ten times longer than false ones to reach even half the depth.
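Those two measurements, reach and depth, can be read off a retweet cascade treated as a tree. The sketch below shows one way to compute them; the edge list is invented for illustration, and the study reconstructed its real cascades from Twitter data.

```python
# A toy retweet cascade: "size" is how many users a rumor reached,
# "depth" is the longest chain of unique retweets from the origin tweet.
from collections import defaultdict

# (retweeter, retweeted_from) pairs; "origin" is the first tweet.
edges = [("a", "origin"), ("b", "a"), ("c", "a"), ("d", "b")]

children = defaultdict(list)
for child, parent in edges:
    children[parent].append(child)

def depth(node):
    """Longest retweet chain below `node` (the origin tweet has depth 0)."""
    if not children[node]:
        return 0
    return 1 + max(depth(c) for c in children[node])

size = 1 + len(edges)          # the origin plus every retweeting user
print(size, depth("origin"))   # 5 3
```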

False political news was more viral than any other topic the study examined. The research suggests that false political Tweets exceeded a reach of 20,000 people three times faster than a true story of any category could reach half that number. Politics, urban legends, and science were the categories that spread fastest and farthest, and false news about politics and urban legends proved the most viral.

Controlling for bots and influencers

After gathering the data, the group implemented several strategies to determine whether variables such as bots and follower counts influenced the results. The researchers ran each account through a bot-detection algorithm, then removed from the data any account with a greater than 50 percent chance of being a bot. Even with the bots eliminated, the group said its conclusion about the faster, wider, deeper spread of false news still stood.
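In pseudocode terms, that filtering step looks something like the sketch below, where bot_probability is a hypothetical stand-in for the bot-detection algorithm the researchers actually used.

```python
# A sketch of the bot-removal step. `bot_probability` is a hypothetical
# stand-in for a real bot-detection model that scores accounts in [0, 1].
def bot_probability(account_id: str) -> float:
    return 0.1  # placeholder score; a real model would inspect the account

accounts = ["user1", "user2", "user3"]

# Keep only accounts with at most a 50 percent chance of being a bot.
humans = [a for a in accounts if bot_probability(a) <= 0.5]

# The cascades are then re-analyzed using tweets from `humans` alone.
print(humans)
```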

Sample graphic of a true and a false cascade: green shows the spread of true news, red the spread of false news. Peter Beshai

But what about the number of followers? Factoring in follower counts showed the researchers that users who tweeted or retweeted false news were actually more likely to have fewer followers, not more. After controlling for the number of followers, the age of the account, the user’s level of activity, and the blue verification badge (which has recently come under fire), the group concluded that false stories were still 70 percent more likely to be retweeted than true ones.
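For readers curious what “controlling for” those variables can look like in practice, here is a rough sketch using a logistic regression on toy data with statsmodels. The covariates, the random numbers, and the model choice are all assumptions made for illustration; the paper’s actual statistical models may differ.

```python
# A hedged sketch of covariate control: model an outcome while holding
# follower count, account age, activity, and verification fixed.
# All data here is randomly generated, purely for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.normal(size=n),              # log follower count (standardized)
    rng.normal(size=n),              # account age (standardized)
    rng.normal(size=n),              # activity level (standardized)
    rng.integers(0, 2, size=n),      # verified badge (0/1)
    rng.integers(0, 2, size=n),      # story is false (0/1)
])
y = rng.integers(0, 2, size=n)       # toy outcome: was it retweeted?

model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
# exp(coefficient) on the "story is false" column gives the odds ratio
# for false stories, with the other covariates held fixed.
print(np.exp(model.params[5]))
```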

The group also worked to see if the fact-checking organizations used in the study had any biases that affected the results. It asked real people to fact-check a smaller sample of the data, drawn from a group of 2016 Tweets that hadn’t been verified by the organizations.

Comparing these manually checked stories with the ones checked by an organization, the researchers said the results were nearly identical. (But hats off to the undergraduates who were tasked with going through three million Tweets.)

So why does false news spread so fast?

The research didn’t stop at the statistics. Based on an earlier theory that humans prefer novel information, the researchers looked at some 5,000 Twitter users who had retweeted either true or false rumors. They analyzed 60 days of the tweets those users had been exposed to before retweeting a rumor, and found that false rumors tended to differ far more from the tweets a user had previously seen than true rumors did, suggesting a higher degree of novelty for false news.
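One simple way to picture “novelty” is as distance between a rumor and everything a user saw before it. The sketch below uses a bag-of-words cosine comparison on invented text; the study itself used more sophisticated information-distance measures, so this is only an illustration of the idea.

```python
# Illustrative novelty score: low similarity to a user's recent tweet
# history means high novelty. Texts here are invented examples.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

history = Counter("the game last night was great".split())
rumor = Counter("explosion reported near the finish line".split())

novelty = 1.0 - cosine(history, rumor)
print(round(novelty, 2))  # closer to 1.0 means more novel
```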


The group then looked to see if specific emotions were tied to false news stories more than true ones. Without access to Facebook-style emoji reactions, the group ran a program that compared the words in the replies against a lexicon mapping words to their associated emotions. The replies to false Tweets contained more words associated with surprise and disgust.

The true Tweets, meanwhile, often had comments that contained words associated with sadness, anticipation, and trust.
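A program like the one described can be sketched in a few lines: count how many reply words map to each emotion in a word-emotion list. The tiny lexicon and the sample replies below are invented stand-ins; the researchers used a full word-to-emotion lexicon.

```python
# Count emotion-associated words in replies using a word-emotion list.
# This miniature lexicon is hand-written purely for illustration.
from collections import Counter

emotion_lexicon = {
    "shocking": "surprise", "unbelievable": "surprise",
    "gross": "disgust", "awful": "disgust",
    "sad": "sadness", "hope": "anticipation", "reliable": "trust",
}

def emotion_counts(comments):
    counts = Counter()
    for comment in comments:
        for word in comment.lower().split():
            if word in emotion_lexicon:
                counts[emotion_lexicon[word]] += 1
    return counts

replies = ["Shocking if true", "unbelievable and gross"]
print(emotion_counts(replies))  # Counter({'surprise': 2, 'disgust': 1})
```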

While the researchers suggest that emotion and novelty may be causes of the difference in spread between false and real news, they did not draw that conclusion definitively. Still, the study revealed that false news tends to have qualities that past research has shown may increase its appeal.

What can platforms do to stop the spread of false news?

Since human nature appears to be one of the reasons false news spreads more, Vosoughi suggests the first solution should be people-based rather than dependent on the social media companies themselves. Educating social media users and students on how to spot fakes could help readers sort through the overwhelming mass of information online.


While the postdoctoral associate says that stopping the spread of false news is ultimately up to the user, social media companies could help by providing more information that readers could use to judge the accuracy of what they see. “In the same way that, when you go to a restaurant to order food, you see the calorie content of the food you are ordering so that you can make a better choice, I think social media platforms could provide some kind of score on the quality of what you are reading,” Vosoughi said. “I don’t think they should censor anyone, but by providing quality scores, people could make better decisions before sharing.”

Vosoughi said he will continue researching the spread of false news by testing possible solutions, to determine whether giving users a nutrition-facts-like label affects sharing behavior.


The study wasn’t the only research sparked by Vosoughi’s social media experience during the aftermath of the Boston Marathon bombings. For his Ph.D. thesis, he developed a false news detection algorithm that, he says, wasn’t 100 percent accurate but helped cut back on some of the noise by catching some of the fakes. He finished the algorithm in 2015 and is currently talking with groups interested in using it, including emergency services.

“When you were reading these things,” he said, while recalling using Facebook, Twitter and Reddit for news during the campus lockdown after the bombings, “you didn’t know if they were true or false. You couldn’t know what to believe and what not to believe. It was the first time that I experienced the effects that false news and rumors can have on you. If you are living in that moment, in that town, false news will change the story even more. That was a wakeup call for me.”
