
You’re probably seeing more social media propaganda, but don’t blame the bots

Bots commonly shoulder the blame for social media propaganda, but a recent study out of the U.K. suggests not only that organized political misinformation campaigns have more than doubled in the last two years, but also that bots take second place to human-run manipulation.

The Global Disinformation Order study, conducted by the University of Oxford, found evidence of social media manipulation by a government agency or political party in 70 countries, an increase from 48 in 2018 and 28 in 2017. The researchers have collected data annually since 2017, but suggest that political propaganda has leveraged social media for the past decade.

The study, co-authored by Samantha Bradshaw and Philip N. Howard, tallies up reports from around the world on cyber troops, defined as “government or political party actors tasked with manipulating public opinion online.” While the report focuses on propaganda that can be traced back to a government agency, politician, or political party, the researchers also found formal coordination with private communications firms and, in more than 40% of the countries, with civic organizations and citizens.

Much of the propaganda is created by actual people: 87% of the countries use human-run accounts, compared with 80% that use bots. In some cases, the study even identified countries, including Russia and Israel, hiring student or youth groups to carry out computational propaganda.

The increase in countries with organized misinformation likely reflects a real rise in activity, but it is also inflated by the growing ability to detect such campaigns. “The number of cases we identified was the most surprising thing about this year’s study. Partially, the growth has to do with more state actors seeing social media as a tool of geopolitical power,” Bradshaw, study co-author and researcher at the Computational Propaganda Project, told Digital Trends. “But not all of the cases were new, per se. Many were older examples that were uncovered by journalists and other independent researchers, who are now equipped with better tools and a better vocabulary for identifying instances of computational propaganda in their own country context.”

This year, the researchers also identified a new category of accounts used for manipulation — in addition to human accounts, bot accounts, and “cyborg” accounts that use both, 7% of the countries hacked or stole real accounts to use in their campaigns. Guatemala, Iran, North Korea, Russia, and Uzbekistan were among the countries using hacked or stolen accounts.

More than half of the countries with evidence of political propaganda (45 out of 70) used these tactics during elections. Among those examples, the study suggests, are politicians with fake followers, targeted ads using manipulated media, and micro-targeting.

So what type of information are the campaigns spreading? Attacking the political opposition was the most widespread tactic, found in 89% of the countries, followed by spreading pro-government or pro-party propaganda; 34% of the countries spread information designed to create division.

While nearly 75% used media like memes, fake news articles, and videos, the manipulation also extended to more covert tactics beyond the content being shared. About 68% used state-sponsored trolls to attack opponents, such as journalists and activists. Many also abused platforms’ reporting tools to censor speech, hoping the automated process would remove content that doesn’t actually violate any platform rules. Another 73% of the countries flood hashtags to make a message appear more widespread.

Most of the cyber troop activity remains on the biggest social network, Facebook, but the researchers saw an increase in campaigns on platforms focused on photos and video, including Instagram and YouTube. The researchers also saw increased activity on WhatsApp.

The United States ranked among the “high cyber troop capacity” group, which indicates a full-time operation with a big budget focusing on both domestic and foreign propaganda. The report suggests the U.S. uses disinformation, data, and artificial amplification of content from human, bot, and cyborg (or mixed human-bot) accounts. The study also showed evidence the U.S. used all five messaging categories included in the study: support, attacks on the opposition, distraction, driving divisions, and suppression.

Bradshaw says that social media companies should do more to make their platforms a better place to connect and discuss politics. “Determining whether a post is part of a manipulation campaign is no easy task. It often requires looking at broad trends across social media and the conversation that is taking place about a particular topic,” she said.

While Bradshaw says detecting misinformation shouldn’t be left solely to the user, some misinformation can be picked up by looking for accounts that post in multiple languages, conducting reverse image searches, and using free online tools to detect automated accounts. 
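For readers curious what such checks could look like in practice, here is a minimal, purely illustrative Python sketch of two of the signals mentioned above: an unusually high posting rate and posting in many languages. The flag_suspicious function, its thresholds, and the sample data are hypothetical assumptions made for the example; they are not tools or criteria from the Oxford study.

```python
# Illustrative only: simple heuristics of the kind described in the article,
# not a detection tool from the Global Disinformation Order study.
from collections import Counter
from datetime import datetime


def flag_suspicious(posts, max_posts_per_day=50, max_languages=3):
    """Return human-readable reasons an account looks automated.

    posts: list of dicts with an ISO 'timestamp' and a 'lang' code (e.g. 'en').
    The thresholds are arbitrary placeholders, not research-backed values.
    """
    reasons = []

    # Heuristic 1: an unusually high posting rate can suggest automation.
    times = sorted(datetime.fromisoformat(p["timestamp"]) for p in posts)
    days = max((times[-1] - times[0]).days, 1)
    rate = len(posts) / days
    if rate > max_posts_per_day:
        reasons.append(f"high posting rate: about {rate:.0f} posts per day")

    # Heuristic 2: posting in many languages can indicate a repurposed account.
    langs = Counter(p["lang"] for p in posts)
    if len(langs) > max_languages:
        reasons.append(f"posts in {len(langs)} languages: {sorted(langs)}")

    return reasons


if __name__ == "__main__":
    sample = [
        {"timestamp": "2019-09-25T08:00:00", "lang": "en"},
        {"timestamp": "2019-09-25T08:01:00", "lang": "ru"},
        {"timestamp": "2019-09-25T08:02:00", "lang": "es"},
        {"timestamp": "2019-09-26T09:00:00", "lang": "fa"},
    ]
    # Lowered threshold so the tiny sample triggers both flags.
    print(flag_suspicious(sample, max_posts_per_day=2))
```

Real investigations rely on far richer signals, such as network-level coordination and reverse image searches, but the sketch shows how even simple account-level patterns can raise a flag worth a closer look.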

The 2019 study highlights changes in political propaganda, a practice that existed long before the internet but has likely been leveraging social media for a decade. The authors end the report with a question: “Are social media platforms really creating a space for public deliberation and democracy? Or are they amplifying content that keeps citizens addicted, disinformed, and angry?”
