For years, companies such as Facebook and Twitter have taken a somewhat laissez-faire approach to moderating what’s posted to their platforms. Even in the wake of the Cambridge Analytica scandal, which underlined the enormous influence these platforms can have on politics, economics, and online discourse, social networks have largely relied on piecemeal efforts to stop malicious groups from abusing their platforms.
In the absence of any overarching, consistently enforced moderation policy, and by actively avoiding the role of “arbiters of truth,” these platforms have let hate speech and misinformation fester and run rampant.
But the tide seems to be turning. In recent weeks, social networks have begun dropping the hammer: revising their policies on hate speech, fact-checking political leaders, and more. In other words, social media platforms are finally stepping up and governing themselves.
No longer turning a blind eye to political voices
Since May, Twitter, after years of giving President Donald Trump free rein, has flagged or restricted several of his tweets for reasons ranging from spreading misleading information to glorifying violence. Earlier this month, Facebook backtracked on its hard-line stance against acting on posts from political leaders and said it will now label rule-breaking content no matter how newsworthy it is.
Reddit is stepping up its community oversight as well. As part of a major hate speech purge, the company took down the subreddit r/The_Donald, one of the platform’s largest and most controversial political communities, because it had devolved into a cesspool of harassment and hateful posts.
Similarly, Twitch, Amazon’s streaming video platform, temporarily suspended President Trump’s channel for violating its policies against hateful conduct. A few weeks ago, Snapchat announced it would no longer promote the president’s account. Most recently, YouTube banned a handful of prominent white supremacist channels over similar concerns.
Kicking off a new era of social media moderation
Considering this wave of self-moderation, it is safe to say that online platforms are, at long last, behaving like the mini-governments they effectively are.
One of the key factors behind this mass shift is Trump’s executive order targeting Section 230, the provision of the Communications Decency Act that shields social media companies from liability for the content their users post. The order seeks to strip away those protections and bring more oversight to how giants like Twitter moderate their platforms.
That executive order, reportedly rushed out as retaliation against Twitter, has essentially backfired, proving that the president has little sway over social networks. And although the order changed nothing legally, the scrutiny it attracted has pushed many social media companies to scrub their platforms of hate speech and other objectionable content, as the weeks since have shown.
But that’s not all. The police killing of George Floyd has spurred a wave of protests across the nation, and the activism hasn’t spared tech companies either. For the first time, Facebook employees publicly criticized the company, and a handful even quit. Reddit co-founder Alexis Ohanian stepped down from the company’s board and urged it to fill his seat with a Black candidate.
For Facebook, the tipping point was likely the advertiser exodus. Over the past month, the social network has faced an organized boycott by some of its biggest advertisers, including Target, Microsoft, Starbucks, and Unilever, over the company’s tendency to let hate groups flourish. How deep a dent did the boycott leave? After a two-day stock decline, Facebook lost nearly $60 billion in market value.
Then there’s the more political angle. With Trump’s position deteriorating in the polls, tech companies may feel safer taking action against him as his term nears its end. These are, lest we forget, the same platforms that refused to act three years ago when Trump tweeted a threat to nuke North Korea. And by rolling out these long-overdue updates now, tech companies can partially escape scrutiny should this year’s elections put them in a position similar to 2016’s.
Will this politically motivated shift last?
While these efforts are steps in the right direction, they unfortunately highlight a worrying truth about online platforms: they’re still more reactive than proactive. Most of these policy changes apply only to the areas that, at the moment, threaten the platforms’ standing.
For instance, a growing Reddit community of about 140,000 members catalogs the ways the social network’s policies are being abused in areas that aren’t under the spotlight. And while Facebook banned dozens of anti-government extremist pages, a BuzzFeed News report pointed out that the company was still making money from “boogaloo” accounts through ads.
Glaring issues such as harassment continue to plague social networks around the world, and on any given day, it’s not uncommon to see offensive hashtags trending. Twitter took down the initial tweets of the “Plandemic” video when it made headlines, but in the days since, the video has continued to resurface, and the network has declined to take down the conspiracy accounts spreading it.
Online platforms are more intertwined with politics than ever. But while this latest round of updates is a welcome development, tech companies will have to make more systemic changes to stay ahead of the curve rather than simply reacting to each new controversy. Because in an increasingly online-first world, Big Tech can’t afford to lag behind malicious actors and trends.