After a few weeks of testing, Twitter has officially rolled out a new tool to address online abuse. Users now have an expanded “muted words” option within the platform’s mobile app, with Twitter announcing on Tuesday that mutes can now be applied to notifications.
“We’re enabling you to mute keywords, phrases, and even entire conversations you don’t want to see notifications about, rolling out to everyone in the coming days,” Twitter said. “This is a feature we’ve heard many of you ask for, and we’re going to keep listening to make it better and more comprehensive over time.”
The tool was first spotted back in October by Twitter user @kendallnkardash, who tweeted about the tool after spotting it in the “notifications” section of the Twitter for iOS app, according to The Next Web.
Despite resembling Instagram’s all-encompassing filter — which was rolled out to general users in September — Twitter’s tool offers added customization, essentially allowing users to block out unpleasant tweets based on offensive words (such as profanities and racial slurs). However, it could also function as a moderator for all kinds of content (from specific topics to hashtags) based on a person’s preferences.
The move comes as backlash against Twitter’s perceived inaction against trolls has intensified. A number of high-profile users have abandoned the platform, or temporarily quit, after enduring torrents of bigoted abuse.
Twitter’s largely hands-off approach relies on its users to report abuse, with complaints directed to a dedicated team of staffers who investigate them. Users also have the option to mute and block others on the site. Recently, Twitter has taken a sterner public stance against harassment, led by CEO Jack Dorsey, who regards the issue as a primary concern.
It is a tough balancing act for a company that has advocated the need to protect free speech on its platform. Over the decade that has passed since its creation, many feel that Twitter has failed to ensure that its users feel safe on the site. And, as detailed in a BuzzFeed report, the calls for change are coming from its biggest advocates.
“Our hateful conduct policy prohibits specific conduct that targets people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or disease,” Twitter noted in its announcement. “Today we’re giving you a more direct way to report this type of conduct for yourself, or for others, whenever you see it happening. This will improve our ability to process these reports, which helps reduce the burden on the person experiencing the abuse, and helps to strengthen a culture of collective support on Twitter.”
Updated on 11-15-2016 by Lulu Chang: Added news of Twitter’s official muted words rollout.