Facebook promised change after investigations showed as many as 126 million Americans saw political ads paid for by Russian organizations — but just how well is the social network sticking to those vows? Facebook representatives shared an update on election security in a presentation on Thursday, March 29, outlining progress and new features in the company's attempt to fight foreign interference, eliminate fake accounts, make ads more transparent, and curb the distribution of fake news.
Alex Stamos, Facebook’s chief security officer, said the platform is looking to fight four different types of fakes: identities, audiences, facts, and narratives. The first aims to prevent Facebook users from concealing their identity, like the pages run by Russia's Internet Research Agency under names like Heart of Texas and Army of Jesus. Fake audiences refers to tactics that push posts higher in the News Feed by making them appear to have a lot of interaction. Fake facts are fake news in the purest sense, while false narratives are posts that are intentionally divisive, sometimes through misleading headlines and language.
Facebook has already made changes to combat fake news that cut those posts’ views by around 80 percent on average. Tessa Lyons, Facebook’s product manager, explained that once fact-checkers say a story is false, any shared links to it have their distribution reduced, which is where that 80 percent reduction comes from. The platform is also using signals to predict potential fake news stories and surface those links to fact-checking organizations faster. For posts disputed by those fact-checkers, users who have already shared the link will be notified, while users about to share it will see that the topic is disputed. Stories that make it through will be shown alongside related articles.
But Lyons said fake articles are not the only type of fake news the platform is working to fight. This week, Facebook started fact-checking photos and videos, too, beginning in France. Graphics didn’t previously go through the same checks as links, but they can contain fake text, Photoshopped images, or unretouched images that are misrepresented by the accompanying text. Integrating photos and videos into the process could curb the spread of fake news even further — last year, one study suggested that a third of altered images go undetected. Facebook plans to take the graphic fact-checks to additional countries beyond France.
For fake news, Facebook’s efforts also include new partnerships, including an agreement with The Associated Press to “debunk” stories related to the elections. Fact-checking partners are now live in six countries, Facebook said, and that number is expected to grow.
Facebook is also working to take the profit out of fake news, typically earned through ad revenue after a story goes viral. Stamos said the platform is working to drive up the costs of running such a website, while also decreasing the potential profits.
For fake accounts, Facebook said it now detects millions of accounts every day by looking for suspicious behavior using machine learning. Many of these pages, Facebook said, are removed before they can have any real influence.
Facebook also isn’t going to wait around for reports of suspicious activity. A new tool, which will launch ahead of the midterms in the U.S., seeks out election-related activity. Facebook didn’t get into the specifics of how the tool works, but said the program sends suspicious Pages and posts to staff for a manual review.
Finally, political ads will be held to standards similar to those for ads running on TV and radio — including a label showing who paid for the ad. Before taking out a political ad, advertisers will have to be authorized. The authorization process includes three steps: submitting a government-issued ID, typing in a code sent only by snail mail to a U.S. address, and disclosing the political candidate, organization, or company the advertiser represents.
Previously announced tools, including a public archive of political ads, will also be running before the midterm elections in the U.S. Records of all ads, even those without the political label, are currently being tested in Canada, where Facebook users can simply click “view ads” to see any ads a business has run, regardless of the audience targeted. And as announced last year, the safety staff is doubling to 20,000 people during 2018.
“We are looking ahead, by studying each upcoming election and working with external experts to understand the actors involved and the specific risks in each country,” Stamos said. “We are then using this process to guide how we build and train teams with the appropriate local language and cultural skills. At the end of the day, we’re trying to develop a systematic and comprehensive approach to tackle these challenges, and then to map that approach to the needs of each country or election.”