Instagram is fighting back against offensive comments. In a blog post titled “Keeping Instagram a Safe Place for Self-Expression,” CEO Kevin Systrom outlined a pair of new features based on machine learning that aim to clean up responses to posts and live video.
The first is an optional filter that simply hides comments automatically determined to be toxic or abusive. It can be toggled on or off in the settings and is launching first in English, with other languages to follow. Instagram points out that if a few comments do manage to slip through the net, you are still free to delete and report them as before or turn commenting off entirely.
Another new feature uses similar technology to automatically block spam, and currently works in a multitude of languages: English, Spanish, Portuguese, Arabic, French, German, Russian, Japanese, and Chinese. This filter has actually been kicking around in some form or another since last fall, according to Wired, which states that the success behind it inspired Instagram to tackle the problem of hate speech using the same kind of algorithms.
As with anything dependent on machine learning, Systrom says these features will improve over time as more users report certain posts and share others. The company considers these measures a critical step toward fostering “kind, inclusive” communities on the network, though admits its work is “far from finished and perfect.”
Instagram is aware of the pitfalls of inadvertently curbing free speech in the name of protecting users, though the system is intelligent enough to take into account the context surrounding each communication. For example, the algorithms are more likely to favor comments from someone you have interacted with frequently and positively in the past than from a stranger. In addition, a user who leaves a blocked comment on one of your posts won’t know their comment has been hidden — preventing people from tempting the censors just for fun.
Comment filtering is just one of a number of new features that have made their way to the photo-centric social network in recent months, with the ability to archive posts having recently entered testing.
Earlier in 2017, Twitter set foot on a similar path to curb abusive behavior by hiding offensive content and “less-relevant” replies that don’t contribute to a discussion. Unlike Instagram’s approach, the posts are still there — they are simply collapsed automatically and can be viewed with an extra tap or click. Twitter has also instituted a 12-hour probation-like system for users behaving violently toward non-followers and taken steps to ensure banned individuals cannot simply turn around and make another account to rejoin the network.