YouTube’s updated anti-harassment policy now covers implied threats, as well as insults based on a person’s race, gender expression, or sexual orientation. The new policy extends to all users, including YouTube creators and public officials who use the platform.
Matt Halprin, YouTube’s vice president and global head of trust and safety, announced the updates to the platform’s policy in a blog post on Wednesday.
“All of these updates represent another step toward making sure we protect the YouTube community. We expect there will continue to be healthy debates over some of the decisions and we have an appeals process in place if creators believe we’ve made the wrong call on a video,” Halprin wrote in the post.
The updated harassment policy prohibits content that simulates violence toward another person, as well as content that “maliciously insults someone.”
The policy also expands the definition of harassment to include repeated abusive behavior across multiple videos.
“Channels that repeatedly brush up against our harassment policy will be suspended from [YouTube Partner Program], eliminating their ability to make money on YouTube,” the blog post reads.
The updates also extend to the comment sections of videos, where all of the new policies will apply as well.
YouTube hinted at these policy changes in September. In a blog post, the video platform said it removed more than 30,000 videos in August that contained hate speech content. The blog post added that an update to its current harassment policy would follow, which would represent a “fundamental shift in our policies.”
The video platform has issued a string of policy updates this year in response to a variety of issues. In April, YouTube updated its harassment policy to address creator-on-creator harassment occurring on the platform.
YouTube also updated its anti-hate speech policy in June. Under those changes, the platform removes videos that promote supremacist views, as well as videos that deny the existence of “well-documented violent events, like the Holocaust or the shooting at Sandy Hook Elementary.”
In addition to harassment and hate speech, YouTube also had to change how it collects data on kid-friendly channels and how it serves ads on child-directed content. Those changes were the result of a September settlement with the Federal Trade Commission (FTC), under which the platform paid a $170 million fine.