On Monday, October 2, Facebook shared data with Congress on over 3,000 Facebook political ads that came from fake Russian accounts during and after the 2016 presidential election. The social media platform has since shared that information with users, along with the steps the company is taking to curb similar attempts in the future.
Facebook says that it found more than 3,000 ads that came from inauthentic accounts linked to a Russian group called the Internet Research Agency (IRA) that operated between 2015 and 2017. Some 10 million people in the U.S. viewed at least one of those ads, with around 44 percent of those views happening before the 2016 presidential election. The ads, as well as the spread of fake news during the election, have brought the social media platform under scrutiny.
In an attempt to bolster transparency, Facebook is now giving users the option of seeing which Facebook Pages and Instagram accounts were associated with the IRA. In a November 2017 news update, the social network announced the creation of a portal that would show users what IRA content they had liked or followed between January 2015 and August 2017. The portal is now up and running, and can be found in the Facebook Help Center.
“It is important that people understand how foreign actors tried to sow division and mistrust using Facebook before and after the 2016 U.S. election,” Facebook noted in its update about the new tool. “That’s why as we have discovered information, we have continually come forward to share it publicly and have provided it to congressional investigators.”
You’ll only be able to use the tool if you did, in fact, directly follow an account set up by a Russian troll on Facebook or Instagram. The tool won’t show you content you may have seen simply because a friend hit the “Like” button, thereby displaying it on your News Feed. Indeed, Facebook noted, it would be “challenging” to identify every single one of the 140 million or so users who likely saw content or ads from the Kremlin.
Facebook said its advertising guidelines are designed to prevent abuse without inhibiting free speech. Barring advertisers from targeting audiences in other countries, for example, would prevent organizations like UNICEF and Oxfam from communicating with global audiences. All of the ads in question violated policy because they came from inauthentic accounts, though Facebook says the content of some of them would have been approved had it come from an authentic account.
“We strongly believe free speech and free elections depend upon each other,” wrote Elliot Schrage, Facebook’s vice president of policy and communications. “We’re fast developing both standards and greater safeguards against malicious and illegal interference on our platform. We’re strengthening our advertising policies to minimize and even eliminate abuse. Why? Because we are mindful of the importance and special place political speech occupies in protecting both democracy and civil society. We are dedicated to being an open platform for all ideas — and that may sometimes mean allowing people to express views we — or others — find objectionable. This has been the longstanding challenge for all democracies: how to foster honest and authentic political speech while protecting civic discourse from manipulation and abuse.”
With the data, Facebook also shared a list of next steps the platform is taking to catch ads like the IRA’s — which violated Facebook policy but ran anyway — in the future. Facebook will add 1,000 people to its staff to manually review more ads, looking at content as well as context and targeted demographics. Ads that target certain demographics will automatically be flagged for manual review, Facebook says, a change prompted both by the election ads and by the inappropriate user-generated demographic categories that previously slipped into the ad-targeting system. The platform currently uses a combination of algorithms and human reviewers to vet millions of ads every week.
The platform is also taking steps to help users better determine where an ad came from. In the name of transparency, Facebook is building a feature that will let users click on an ad targeted to them and also view ads aimed at other demographics. Expanded advertising policies are also coming: for Pages that want to run ads related to U.S. federal elections, Facebook will require more documentation confirming the business or organization behind them. The social media giant is also reaching out to industry leaders and other governments to establish industry standards, continuing efforts like its partnership with Twitter, Microsoft, and YouTube to fight extremist content.
“The 2016 U.S. election was the first where evidence has been widely reported that foreign actors sought to exploit the internet to influence voter behavior,” wrote Elliot Schrage, Facebook’s vice president of policy and communications. “We understand more about how our service was abused and we will continue to investigate to learn all we can. We know that our experience is only a small piece of a much larger puzzle.”