TikTok’s popularity has soared in recent months, but that growth has come at a cost: its content moderation team is struggling to keep the video platform free of spam and malicious content. According to TikTok’s latest transparency report, the company took down more videos than ever in the first half of 2020 (January through June) for violating its guidelines, and it fielded a growing number of government requests for user information.
Over 104 million videos were removed from TikTok worldwide in the first six months of this year, more than double the figure from the second half of 2019. About 37 million of these came from India, followed by nearly 10 million in the United States.
TikTok says this still amounts to less than 1% of the total number of videos uploaded to its app. In the report, it adds that it took action on 96.4% of the removed clips before a user reported them, and that 90.3% of them had no views. TikTok’s algorithms automatically flagged and removed 10 million of these clips without human review.
“As a result of the coronavirus pandemic, we relied more heavily on technology to detect and automatically remove violating content in markets such as India, Brazil, and Pakistan,” TikTok wrote in a blog post.
TikTok’s content moderation practices haven’t always been effective, however. Earlier this month, the company was scrambling to suppress the spread of a viral, gruesome video that showed a man taking his own life with a gun.
With over 100 million users in the U.S. alone, TikTok is now also a significant potential resource for law enforcement agencies seeking personal data for investigations. In the U.S., TikTok received 226 requests from law enforcement or government entities for user information and content restrictions, up substantially from 100 in the preceding six months, and it complied in 85% of those cases. In India, the number of requests ran into four digits.
TikTok has been repeatedly criticized for censoring content that’s critical of China. However, those statistics are not available in this report, since ByteDance, the China-based owner of TikTok, operates a separate, localized alternative called Douyin in China. To fend off these accusations, back in March this year, TikTok formed a new committee of experts to offer more transparency in its content moderation process and seek “unvarnished views” on the social video app’s policies.
Alongside its latest transparency report, TikTok also published a proposal for a global coalition of nine social and content platforms to take a collaborative approach against hate speech and other harmful content. In a public letter, TikTok’s interim global chief, Vanessa Pappas, suggests that the safety teams of these social media companies should notify one another of any harmful content they identify on their respective platforms that has the potential to proliferate across the rest of the internet. And last weekend, in a tweet, Pappas called on Facebook and Instagram to join TikTok’s legal challenge against the Trump administration.
We’ve reached out to TikTok and other major social media platforms for a comment and we’ll update the story when we hear back.