Twitter is testing a new feature that gives you the chance to reword a potentially offensive reply before you post it.
The move comes as part of ongoing efforts by the social media company to rid its platform of abuse and bullying.
Currently an experiment for select iPhone users, the feature displays a short message if Twitter’s machine-learning smarts deem your intended reply to be potentially offensive. In other words, if your response is peppered with expletives or contains the kind of language often associated with harassment, Twitter will ask whether you’d like to reconsider and express yourself in more, shall we say, diplomatic terms.
“When things get heated, you may say things you don’t mean,” the company said in a tweet announcing the anti-abuse test. “To let you rethink a reply, we’re running a limited experiment on iOS with a prompt that gives you the option to revise your reply before it’s published if it uses language that could be harmful.”
— Twitter Support (@TwitterSupport) May 5, 2020
It should be emphasized that the feature is currently in a test phase and so may never become a permanent part of Twitter. But if the company’s data shows it has a positive effect, we can expect it to roll out more widely in the near future.
Twitter isn’t the first social media app to use such a system. Instagram, for example, launched a similar tool last year that also uses machine learning to detect offensive language in comments before they’re posted. If Instagram’s software detects any potentially offensive words, it’ll ask the poster if they want to think again before hitting the send button. More recently, it expanded the tool to captions for feed posts.
Twitter says it prohibits abuse, harassment, and other “hateful conduct” on its platform, but it can only act against a user once the content has been posted. This has led to widespread criticism over the years that it’s failing to effectively address the issue, prompting some to quit the platform. The company, however, insists it’s working constantly to clean up the service with a steady flow of new features and support systems.
Those who experience abuse on Twitter can report the offender to the company. Blocking users or making use of an array of muting options is also possible. If the abuse is particularly alarming, such as threats of violence, Twitter recommends you also contact law enforcement. More information on how to deal with abuse can be found on Twitter’s website.