You’re probably seeing more social media propaganda, but don’t blame the bots

Bots commonly shoulder the blame for social media propaganda, but a recent study out of the U.K. suggests not only that organized political misinformation campaigns have more than doubled in the last two years, but that bots take second place to human-run manipulation.

The Global Disinformation Order study, conducted by the University of Oxford, found evidence of social media manipulation by a government agency or political party in 70 countries, an increase from 48 in 2018 and 28 in 2017. The researchers have collected data annually since 2017, but they suggest political actors have been leveraging social media for propaganda for the past decade.

The study, co-authored by Samantha Bradshaw and Philip N. Howard, tallies up reports from around the world on cyber troops, defined as “government or political party actors tasked with manipulating public opinion online.” While the report focuses on propaganda that can be traced back to a government agency, politician, or political party, the researchers also found formal coordination with private communications firms and, in more than 40% of the countries, with civic organizations and citizens.

Much of the propaganda is created by actual people: 87% of the countries use human-run accounts, compared with 80% using bots. In some cases, the study even identified countries, including Russia and Israel, hiring student or youth groups to carry out computational propaganda.

The growing number of countries with organized misinformation campaigns likely reflects a genuine increase in activity, but it is also inflated by researchers’ improving ability to detect it. “The number of cases we identified was the most surprising thing about this year’s study. Partially, the growth has to do with more state actors seeing social media as a tool of geopolitical power,” Bradshaw, study co-author and researcher at the Computational Propaganda Project, told Digital Trends. “But not all of the cases were new, per se. Many were older examples that were uncovered by journalists and other independent researchers, who are now equipped with better tools and a better vocabulary for identifying instances of computational propaganda in their own country context.”

This year, the researchers also identified a new category of accounts used for manipulation — in addition to human accounts, bot accounts, and “cyborg” accounts that use both, 7% of the countries hacked or stole real accounts to use in their campaigns. Guatemala, Iran, North Korea, Russia, and Uzbekistan were among the countries using hacked or stolen accounts.

More than half of the countries with evidence of political propaganda — 45 out of 70 — used the tactics during elections. Among the examples the study cites are politicians with fake followers, targeted ads built on manipulated media, and micro-targeting.

So what type of information are the campaigns spreading? Attacking the political opposition was the most widespread tactic, found in 89% of the countries, followed by spreading pro-government or pro-party propaganda; 34% of the countries spread information designed to create division.

While nearly 75% used tactics like memes, fake news, and videos, the manipulation also extended to more covert methods beyond the media being shared. About 68% used state-sponsored trolls to attack opponents such as journalists and activists. Many also abused the platforms’ reporting tools to censor speech, hoping the automated process would remove content that doesn’t actually violate any platform rules. Another 73% of the countries flooded hashtags to make a message more widespread.

Most of the cyber troop activity remains on the biggest social network, Facebook, but the researchers saw an increase in campaigns on platforms focused on photos and video, including Instagram and YouTube. The researchers also saw increased activity on WhatsApp.

The United States ranked in the “high cyber troop capacity” group, which indicates a full-time operation with a large budget focused on both domestic and foreign propaganda. The report suggests the U.S. uses disinformation, data, and artificial amplification of content from human, bot, and cyborg accounts. The study also found evidence the U.S. used all five messaging categories included in the study: support, attacking the opposition, distraction, driving division, and suppression.

Bradshaw says that social media companies should do more to create a better place to connect and discuss politics. “Determining whether a post is part of a manipulation campaign is no easy task. It often requires looking at broad trends across social media and the conversation that is taking place about a particular topic,” she said.

While Bradshaw says detecting misinformation shouldn’t be left solely to the user, some misinformation can be picked up by looking for accounts that post in multiple languages, conducting reverse image searches, and using free online tools to detect automated accounts. 
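Those detection tips lend themselves to simple heuristics. As a rough illustration only, here is a minimal Python sketch that flags an account posting at an implausibly high rate or in several languages; the looks_automated function, its thresholds, and the sample data are hypothetical inventions for this example, not tools from the study.

```python
from collections import Counter
from datetime import datetime

# Hypothetical sample: (timestamp, detected language) pairs for one account.
# In practice these would come from a platform API plus a language detector.
posts = [
    (datetime(2019, 9, 26, 10, 0), "en"),
    (datetime(2019, 9, 26, 10, 1), "ru"),
    (datetime(2019, 9, 26, 10, 2), "en"),
    (datetime(2019, 9, 26, 10, 3), "es"),
]

def looks_automated(posts, max_posts_per_hour=30, max_languages=2):
    """Flag an account using two crude signals mentioned above:
    an implausibly high posting rate and posting in many languages.
    The thresholds are illustrative, not validated."""
    if len(posts) < 2:
        return False
    timestamps = sorted(t for t, _ in posts)
    # Elapsed time in hours, floored at one minute to avoid division blow-ups.
    hours = max((timestamps[-1] - timestamps[0]).total_seconds() / 3600, 1 / 60)
    rate = len(posts) / hours
    languages = Counter(lang for _, lang in posts)
    return rate > max_posts_per_hour or len(languages) > max_languages

# Four posts in three minutes across three languages trips both heuristics.
print(looks_automated(posts))  # True
```

Real detection services combine many more signals, but even crude checks like these can surface accounts worth a closer look.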

The 2019 study highlights changes in a form of political propaganda that existed long before the internet but has likely been leveraging social media for a decade. The study authors end the report with a question: “Are social media platforms really creating a space for public deliberation and democracy? Or are they amplifying content that keeps citizens addicted, disinformed, and angry?”
