Several hundred new accounts and thousands of new tweets representing a surge in inauthentic social media activity have sprung up around the Black Lives Matter protests happening around the U.S. The bots are spreading disinformation and conspiracies around the movement, researchers said.
The bot-tracking site Bot Sentinel found “an uptick in inauthentic activity by accounts that were already active and also new accounts created over the past several days,” founder and CEO Christopher Bouzy told Digital Trends in an email.
Most of the accounts were promoting disinformation campaigns and conspiracy theories, including false assertions that billionaire George Soros was funding the protests and that the killing of George Floyd — whose death at the hands of a police officer sparked the protests — was a hoax.
“Disinformation is spread on both sides of the political spectrum, but regarding the protests, it is overwhelmingly one-sided and targeted toward Trump supporters and Conservatives,” Bouzy wrote.
Oumou Ly, a staff fellow at the Harvard Berkman Klein Center, noted that most of the disinformation seems to be coming from the far-right. “The online information environment is very asymmetric,” she told Digital Trends. “The right participates more often and in a more sustained way in spreading disinformation because they have more to gain politically from it. It’s part of their political strategy.”
One particularly nefarious strain of disinformation came from the so-called Boogaloo movement, a far-right movement whose name refers to inciting a race war or second civil war in the U.S.
Mutale Nkonde, a non-resident fellow at Stanford’s Digital Civil Society Lab and a fellow at Berkman Klein, said she had heard from police sources that officers were seizing weapons stashes from people associated with the Boogaloo movement.
“They’ve been getting more active,” Nkonde said. “If I were doing a social network analysis, I’d say what weaves them [the far-right tweets] together is Boogaloo. They’re very mad at Black people.”
However, Twitter has warned that the specter of bots shouldn’t be used as a “tool by those in positions of political power to tarnish the views of people who may disagree with them or online public opinion that’s not favorable.”
Twitter Head of Site Integrity Yoel Roth and Global Public Policy Strategy & Development Director Nick Pickles wrote in a May blog post that some accounts that appear to act like bots may belong to real people, and that numerical usernames shouldn’t be taken as proof that a user is a bot.
“What’s more important to focus on in 2020 is the holistic behavior of an account, not just whether it’s automated or not,” the executives wrote. “That’s why calls for bot labeling don’t capture the problem we’re trying to solve and the errors we could make to real people that need our service to make their voice heard. It’s not just a binary question of bot or not — the gradients in between are what matter.”
Another researcher, Carnegie Mellon University computer science professor Kathleen Carley, claimed to have found that 30% to 49% of accounts tweeting about the protests were “guaranteed” to be bots at any given time.
Over the last five days of May, Carley said she observed 625,375 tweets and 413,900 users that qualified as suspected inauthentic activity.
However, Twitter and other researchers disputed these claims, saying Carley’s research should be peer-reviewed to ensure its accuracy.
“We always support and encourage researchers to study the public conversation on Twitter – their work is an important part of helping the world understand what’s happening,” a Twitter spokesperson said. “But research reports that only analyze a limited quantity of public data and are not peer-reviewed are misleading and inaccurate.”
Update 6/5: This story has been updated to include comments from Twitter.
Update 6/4: This story has been updated to clarify that the bots are spreading disinformation and conspiracy theories.