
Can social media predict mass shootings before they happen?

Mourners attend a memorial service in the Oregon District to recognize the victims of an early-morning mass shooting in the popular nightspot in Dayton, Ohio. Scott Olson / Getty Images

In the wake of two mass shootings over the weekend that left at least 31 people dead, President Donald Trump called on social media companies like Twitter and Facebook to “detect mass shooters before they strike.”

“I am directing the Department of Justice to work in partnership with local state and federal agencies, as well as social media companies, to develop tools that can detect mass shooters before they strike,” Trump said in a White House speech in response to the shootings Monday morning.

The weekend’s violence included a shooting at a Walmart in El Paso, Texas, on Saturday that left 22 people dead, and another early Sunday morning in Dayton, Ohio, that left 10 dead, including the shooter. The alleged shooter in El Paso posted a racist manifesto full of white supremacist talking points on 8chan, a hate-filled online message board. As far as authorities can tell, neither shooter posted a warning on mainstream social networks.

Trump’s vague directive shifts some of the blame for gun violence onto social media companies, which run massive platforms and can sift through the personal data of billions of people. But there’s a difference between tipping authorities off when someone posts a concrete threat of violence and using algorithms and massive troves of data to identify who might become a shooter.

Companies like Google, Facebook, Twitter, and Amazon already use algorithms to predict your interests, your behaviors, and, crucially, what you like to buy. Sometimes an algorithm gets your personality right – like when Spotify somehow manages to put together a playlist full of new music you love. In theory, companies could use the same technology to flag potential shooters.

“To an algorithm, the scoring of your propensity [to] purchase a particular pair of shoes is not very different from the scoring of your propensity to become [a] mass murderer — the main difference is the data set being scored,” wrote technology and marketing consultant Shelly Palmer in a newsletter on Sunday.
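Palmer’s point is easy to see in code. Below is a minimal, hypothetical sketch – not anything these companies have published – of how a generic off-the-shelf classifier produces a “propensity” score. The model code stays the same no matter what the label means; only the training data changes. Every feature and label here is an invented placeholder.

```python
# Minimal sketch of generic propensity scoring. The model code is identical
# whether the label means "bought the shoes" or any other outcome -- only
# the training data changes. All features and labels are invented placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical behavioral features (e.g., counts of certain actions per week).
X = rng.random((1000, 5))
# Hypothetical binary outcome derived from the first two features.
y = (X[:, 0] + 0.5 * X[:, 1] > 0.9).astype(int)

model = LogisticRegression().fit(X, y)

# A "propensity score" is just the predicted probability of the outcome.
new_user = rng.random((1, 5))
print(f"propensity score: {model.predict_proba(new_user)[0, 1]:.3f}")
```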

But preventing mass shootings before they happen raises thorny legal questions: How do you distinguish someone who is merely venting anger online from someone who could actually carry out a shooting? Can you arrest someone because a computer thinks they’ll eventually become a shooter?

A Twitter spokesperson wouldn’t say much directly about Trump’s proposal, but did tell Digital Trends that the company suspended 166,513 accounts connected to the promotion of terrorism during the second half of 2018. Twitter’s policy doesn’t allow specific threats of violence or wishes “for the serious physical harm, death, or disease of an individual or group of people.”

Twitter also frequently helps facilitate investigations when authorities request information – but the company largely avoids proactively flagging banned accounts (or the people behind them) to those same authorities. Even if it did, that would mean referring 166,513 people to the FBI – far more than the agency could ever investigate.

We reached out to Facebook for details on how it might work with federal officials to prevent more mass shootings, but the company didn’t respond. Facebook has a tricky history when it comes to hate speech and privacy: It maintains a detailed hate speech and violence policy, but decisions about whether to remove content or ban a user ultimately rest on the subjective judgment of human content moderators.

Even if someone does post to social media immediately before deciding to unleash violence, the post often isn’t something that would trip either Twitter’s or Facebook’s policies. The man who killed three people at the Gilroy Garlic Festival in Northern California posted to Instagram from the event itself – one post calling the food served there “overpriced” and a second telling people to read a 19th-century pro-fascist book that’s popular with white nationalists.

A simple search of both Twitter and Facebook will turn up anti-immigrant and anti-Hispanic rhetoric similar to that found in the El Paso shooter’s manifesto. The alleged shooters behind the Pittsburgh synagogue attack in 2018 and the Christchurch mosque shootings in March also both expressed support for white nationalism online. Companies could use algorithms to detect and flag that sort of behavior as an indicator that someone could become a mass shooter, but doing so would require an extensive change to their existing policies. Essentially, they’d need to ban accounts (or flag them to authorities) before anyone makes a concrete threat against another person or group.
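To make concrete what “detect and flag that sort of behavior” would mean mechanically, here is a deliberately naive, hypothetical sketch. Real platforms use trained models rather than phrase lists, but the structural problem is the same: a human still has to decide where the line sits before any concrete threat exists.

```python
# Deliberately naive sketch of pre-threat content flagging. The phrase list
# and threshold are invented placeholders; a real system would use a trained
# model, but would still need a human-chosen line before acting.
FLAGGED_PHRASES = ["placeholder extremist phrase", "placeholder slogan"]

def risk_score(post: str) -> int:
    """Count how many flagged phrases appear in a post."""
    text = post.lower()
    return sum(phrase in text for phrase in FLAGGED_PHRASES)

def should_escalate(post: str, threshold: int = 1) -> bool:
    # Where the threshold sits -- and what "escalate" means -- is exactly
    # the policy change described above, not a technical detail.
    return risk_score(post) >= threshold
```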

There’s also the question of whether algorithms can get it right. The Partnership on AI, an organization studying the future of artificial intelligence, conducted an intensive study of algorithmic tools that try to “predict” crime. Its conclusion? “These tools should not be used alone to make decisions to detain or to continue detention.”

“Although the use of these tools is in part motivated by the desire to mitigate existing human fallibility in the criminal justice system, it is a serious misunderstanding to view tools as objective or neutral simply because they are based on data,” the organization wrote in its report.
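Part of the problem is simple arithmetic: mass shooters are vanishingly rare, so even a very accurate predictor would drown investigators in false positives. The figures below are illustrative assumptions, not real statistics:

```python
# Illustrative base-rate arithmetic -- every number here is an assumption.
population = 200_000_000      # assumed number of monitored accounts
real_shooters = 50            # assumed actual future shooters among them
sensitivity = 0.99            # assume the model catches 99% of real cases
false_positive_rate = 0.01    # assume it wrongly flags just 1% of everyone else

flagged_real = real_shooters * sensitivity
flagged_innocent = (population - real_shooters) * false_positive_rate

precision = flagged_real / (flagged_real + flagged_innocent)
print(f"accounts flagged: {flagged_real + flagged_innocent:,.0f}")
print(f"share of flags that are real: {precision:.6%}")
# Roughly 2 million flags, of which about 0.0025% point at an actual shooter.
```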

We’ve already seen what can happen when an algorithm gets it wrong. Sometimes it’s innocuous, like when you see an ad for something you’d never buy. But third parties can exploit algorithms, too: some far-right extremists gamed YouTube’s recommendation algorithm to spread an anti-immigrant, white supremacist message until YouTube changed its hate speech policy in June. Preventing radicalization is one thing – predicting potential future crimes is another.

We’ve reached out to the Department of Justice to see if it has any more details on how social media companies could prevent shootings before they happen. We’ll update this story if we hear back.

Mathew Katz
Former Digital Trends Contributor