Twitter expands muting and filtering tools, uses algorithms to track abuse

Twitter is following up on the steady stream of safety updates it has rolled out over the past few weeks with more measures aimed at stamping out abuse.

Unlike in the past, the company is now taking matters into its own hands by deploying algorithms to identify abusive behavior. But that doesn’t mean it is doing away with its safety tools that allow users to customize what they see.

On Wednesday, Twitter announced new controls that center on the notifications tab (where users receive alerts regarding new followers, retweets, likes, and mentions). The platform now lets you activate a new set of advanced filters that can block notifications from certain types of accounts.

The options range from broader controls, such as turning off notifications from all accounts you don’t follow, to filters that target anonymous, and potentially nefarious, accounts, including those lacking a profile photo, verified email address, or phone number.
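
To make these options concrete, here is a minimal, purely illustrative sketch of how such per-account filters could be modeled and applied. The field names, settings, and logic are assumptions for this example and do not reflect Twitter’s actual implementation.

```python
# Illustrative only: a toy model of the advanced notification filters
# described above. Field names are hypothetical, not Twitter's schema.
from dataclasses import dataclass

@dataclass
class Account:
    followed_by_me: bool
    has_profile_photo: bool
    has_confirmed_email: bool
    has_confirmed_phone: bool

@dataclass
class FilterSettings:
    mute_non_followed: bool = False       # accounts you don't follow
    mute_no_photo: bool = False           # default profile photo
    mute_unconfirmed_email: bool = False
    mute_unconfirmed_phone: bool = False

def should_deliver(sender: Account, settings: FilterSettings) -> bool:
    """Return True if a notification from `sender` passes the filters."""
    if settings.mute_non_followed and not sender.followed_by_me:
        return False
    if settings.mute_no_photo and not sender.has_profile_photo:
        return False
    if settings.mute_unconfirmed_email and not sender.has_confirmed_email:
        return False
    if settings.mute_unconfirmed_phone and not sender.has_confirmed_phone:
        return False
    return True

# Example: an anonymous account with no photo or confirmed email is filtered out.
settings = FilterSettings(mute_no_photo=True, mute_unconfirmed_email=True)
anonymous = Account(followed_by_me=False, has_profile_photo=False,
                    has_confirmed_email=False, has_confirmed_phone=True)
print(should_deliver(anonymous, settings))  # False
```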

Twitter is also expanding its mute feature, which lets users remove certain keywords, phrases, or entire conversations from notifications. You can now apply the muting options to your home timeline and enable them for a set period of time (such as 24 hours, seven days, or a month) or keep them in place permanently. Twitter says both of the new updates were highly requested features from its community.
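
As a rough illustration of the timed mutes described above, the sketch below models a keyword mute that can apply to the home timeline and notifications and that expires after a chosen duration. The data model and duration labels are invented for this example; Twitter has not published how mutes are stored.

```python
# Illustrative only: how timed keyword mutes could be represented.
# The duration options mirror the article; the data model is hypothetical.
from datetime import datetime, timedelta
from typing import Optional

DURATIONS = {
    "24_hours": timedelta(hours=24),
    "7_days": timedelta(days=7),
    "30_days": timedelta(days=30),
    "forever": None,  # no expiry
}

class KeywordMute:
    def __init__(self, keyword: str, duration: str,
                 home_timeline: bool = True, notifications: bool = True):
        self.keyword = keyword.lower()
        self.home_timeline = home_timeline
        self.notifications = notifications
        span: Optional[timedelta] = DURATIONS[duration]
        self.expires_at = datetime.utcnow() + span if span else None

    def is_active(self, now: Optional[datetime] = None) -> bool:
        now = now or datetime.utcnow()
        return self.expires_at is None or now < self.expires_at

    def hides(self, tweet_text: str) -> bool:
        """True if this mute is still active and the tweet contains the keyword."""
        return self.is_active() and self.keyword in tweet_text.lower()

# Example: mute "spoilers" for seven days.
mute = KeywordMute("spoilers", "7_days")
print(mute.hides("No spoilers please!"))  # True while the mute is active
```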

In its safety blog post detailing the new initiatives, Twitter admitted to using machine learning systems to track abusive behavior. Twitter’s algorithms are behind the recent timeouts it has been placing on select accounts, which essentially limit the visibility of the alleged offender’s activity on the platform.

Twitter had previously kept quiet about the change, but engineering vice president Ed Ho officially confirmed it in the blog post. Ho says the company uses its own (human) judgment to act on the accounts flagged by its algorithms, and that the timeout is placed only on accounts that repeatedly tweet abuse at non-followers. The Twitter exec also admits the new tools could be prone to error at the outset, but says they will improve over time.
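
The rule Ho describes, a temporary limit on accounts that repeatedly tweet abuse at people who don’t follow them, can be pictured as a simple post-processing step over the output of an upstream abuse classifier. The thresholds, timeout length, and data shapes below are assumptions made purely for illustration; Twitter has not disclosed how its system actually works.

```python
# Illustrative only: a toy version of the "repeated abuse at non-followers"
# rule. Thresholds and the timeout length are invented for this sketch.
from collections import Counter
from datetime import datetime, timedelta

ABUSE_THRESHOLD = 3                # hypothetical repeat-offence count
TIMEOUT = timedelta(hours=12)      # hypothetical timeout length

def flag_for_timeout(events, followers):
    """
    events: iterable of (sender, target, is_abusive) tuples produced by an
            upstream abuse classifier (the machine-learning step).
    followers: dict mapping sender -> set of accounts that follow them back.
    Returns {sender: timeout_expiry} for accounts a human should review.
    """
    repeat_offences = Counter(
        sender
        for sender, target, is_abusive in events
        if is_abusive and target not in followers.get(sender, set())
    )
    now = datetime.utcnow()
    return {
        sender: now + TIMEOUT
        for sender, count in repeat_offences.items()
        if count >= ABUSE_THRESHOLD
    }

# Example: three abusive tweets at a non-follower trip the hypothetical threshold.
events = [("troll", "victim", True)] * 3
print(flag_for_timeout(events, followers={"troll": set()}))
```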

The company has thus far refrained from providing information about its algorithms. When quizzed by Digital Trends on its increasing reliance on machine learning, a Twitter spokesperson claimed no specifics were being offered because the platform didn’t want anyone to take advantage of a system built to ensure safety.

“Our platform supports the freedom to share any viewpoint, but if an account continues to repeatedly violate the … rules, we will consider taking further action,” Ho wrote in the blog post.

Twitter is also promising to bring more transparency to the abuse reporting process. It says that users who report policy violations will receive notifications via the Twitter mobile app when their report is received and again if the company decides to take further action.
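
One way to picture that reporting flow is as a small status progression in which two of the states trigger an in-app notice. The statuses and messages below are invented for illustration and are not Twitter’s actual report pipeline.

```python
# Illustrative only: a minimal status progression for the report updates
# described above (received, then a follow-up if action is taken).
from enum import Enum, auto

class ReportStatus(Enum):
    SUBMITTED = auto()
    RECEIVED = auto()      # triggers an in-app "we got your report" notice
    ACTION_TAKEN = auto()  # triggers a follow-up notice if action is taken

def notify(reporter: str, status: ReportStatus) -> str:
    """Build the in-app message a reporter would see for a given status."""
    messages = {
        ReportStatus.RECEIVED: "Thanks, we received your report.",
        ReportStatus.ACTION_TAKEN: "We took action on the account you reported.",
    }
    return f"@{reporter}: {messages.get(status, 'Report submitted.')}"

print(notify("example_user", ReportStatus.RECEIVED))
```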

Saqib Shah