YouTube may be the internet’s preeminent source of tutorials, vloggers, news, and viral videos — indeed, YouTube’s users consume a collective 4 billion videos per day and upload 300 hours of footage every minute — but it’s also one of the most acerbic. YouTube’s comment section has a tendency to become unruly and ostracizing, sometimes to the point of abusiveness. In an attempt to curb those and other forms of bullying that plague the site’s forums, the video service is rolling out a new tool that helps users flag potentially offensive content.
The new feature, which debuted in beta on Thursday, relies on algorithmic intelligence to identify comments that run afoul of YouTube’s Community Guidelines. Then, creators take charge: Those who opt in can approve, hide, or report the comments to YouTube’s moderation team.
It is in many ways an extension of Google’s existing anti-harassment tool, which parses comments for flagrant, inappropriate, or potentially hurtful words. But unlike that feature, which merely institutes a blanket ban on certain turns of phrase, Google’s new tool looks deeper: It compares the characteristics of comments that have been removed by creators in the past with those of comments under review.
“We recognize that the algorithms will not always be accurate: The beta feature may hold some comments you deem fine for approval, or may not catch comments you’d like to hold and review,” YouTube product manager Courtney Lessard said in a blog post. “When you review comments, the system will take that feedback into account and get better at identifying the types of comments to hold for review.”
In addition to the new tool, YouTube is enhancing its existing moderation suite. Now, creators can add other users as moderators to a YouTube channel and add color to the text of those users’ names (presumably to ensure moderators remain easy to spot). And they can “pin” comments to the top of a video’s comment section to direct attention to special announcements, and “heart” contributions to the conversation they find particularly noteworthy.
The changes come on the heels of YouTube’s Heroes program, a gamified initiative that saw designated users granted the ability to moderate and flag inappropriate or abusive videos for review. Heroes lack the ability to hide or remove videos or comments, though — YouTube staff retains those powers.
YouTube may be the latest social network to roll out new tools aimed at combating hostile online behavior, but it’s far from the first. Earlier this year, discussion board Reddit introduced a blocking tool that allows users to hide the activity of potential abusers. In August, Microsoft launched two new tools, one for reporting forms of hate speech to moderators and the other for contesting those accusations, to many of its online services. And Instagram recently began offering a tool that automatically censors comments containing abusive words.
That’s good news. Internet bullying’s a growing problem — as many as 40 percent of internet users experience harassment at one point or another.
“[We’re] dedicated to making your conversations with your community easier and more personal,” Lessard said. “We’re excited to see how you use these features to grow stronger communities and have more constructive conversation in your comment section.”