It happened slowly — and then all at once. Has the tech reckoning against Donald Trump finally arrived?
Let’s look at the stats: Donald Trump, a Twitter user since March 2009, has been President of the United States since January 20, 2017, or 1,257 days as of this article’s publication.
For most of those days, he was free to roam across social media, sharing often offensive and inflammatory comments, posts and tweets, and frequently retweeting conspiracy theorists. But for the vast majority of his presidency, Trump was protected by his status as president. His statements were so newsworthy, platforms said, that it would be against the public interest to interfere with them.
Then something changed.
On day 1,165 of Trump’s presidential life, Twitter began flagging his tweets — in one case for “glorifying violence,” in another for spreading misinformation about mail-in voting. Trump — long itching for a fight — hit back, signing an executive order that attempted to curtail the protections social media platforms rely on to shield them from lawsuits related to the content posted on their sites.
But Trump’s move wasn’t enough to stop the drip-drip-drip of social media sites deciding they had had enough. On day 1,231, Snapchat’s parent company Snap announced it would no longer promote the presidential account on its discover page because of comments that promote “racial violence and injustice.”
On day 1,254, Facebook, the largest of these platforms, took down a Trump campaign ad that appeared to use Nazi imagery, and announced it would start flagging any of the president’s posts it felt broke its rules. Facebook’s decision came after it faced a growing advertiser backlash over its hate speech policies.
And now on day 1,257, popular streaming service Twitch has temporarily suspended the president’s campaign account for “hateful conduct.”
It’s not just the president, although he is the highest-profile example of what may be a culling of inflammatory right-wing figures on social media sites. Around the same time, YouTube took down several other prominent right-wing and alt-right figures, including Richard Spencer and David Duke, the former head of the Ku Klux Klan.
Reddit today also suspended the notorious pro-Trump subreddit r/The_Donald for “frequent rule-breaking.” The group had an average of 7,780 daily users and more than 790,000 subscribers, and was considered a bastion of pro-Trump conspiracy theories, as well as racist, misogynistic, and Islamophobic content.
Watching the consequences of the president’s social media actions unfold, this backlash certainly looks like it could be a turning point.
Twitter may not have the most active users of the popular social media sites; according to MuckRack, it has 386 million monthly active users (MAU), compared with Facebook’s 2.6 billion. But as a site popular with many prominent media figures, its corporate decisions carry massive clout. Twitter’s move provided the cover other platforms needed to take the steps they had long been pressured to take, including curbing hateful rhetoric, especially hate speech coming from on high.
So, at last, has the president finally been brought down to the level of the rest of us, who are held accountable for what we say in public, even if that accountability is somewhat unevenly enforced? Whether that drip-drip-drip turns into a tsunami that finally cleanses these platforms of their more unsavory elements remains to be seen.
Much like the controversies around “cancel culture,” this will only work if the rules continue to apply to Trump. It would not behoove Twitter to start flagging his tweets and then mysteriously stop.
Similarly, if Snap began promoting his account again, or if Facebook ceased moderating his content, it would lend strength to the narrative that Trump and his supporters love to peddle: that they have triumphed over a biased and malevolent social media monster, and that they are stronger than the machinery working against them.
In reality, there is no such machinery. Just a bunch of private companies that tolerated questionable and hateful content for more than 1,200 days before deciding to do something about it. For the good of the internet — and possibly even U.S. democracy — let’s hope it stays that way.