You might want to think twice before posting something vitriolic beneath this story: Researchers from Stanford and Cornell have developed an algorithm that automatically weeds out online trolls so they can be banned from the communities they disrupt.
The study was funded by Google, as Wired reports, and the researchers produced a system that needed only five to ten posts to spot a troll. Unsurprisingly, bad spelling and grammar were among the key indicators, and the quality and legibility of a troll's posts tended to degrade over time.
“We find that [antisocial users] tend to concentrate their efforts in a small number of threads, are more likely to post irrelevantly, and are more successful at garnering responses from other users,” explains the report, which looked at the posting history of online community members who had subsequently been banned.
Another pattern the researchers spotted was increasingly negative reactions from other users, met by an increasing level of antisocial behavior in response. The number of censored posts, however, showed no discernible pattern: some "Future-Banned Users" (FBUs) had many posts pulled by moderators, while others had few.
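To make the idea concrete, here is a minimal sketch of how signals like those the study describes (poor spelling, negative reactions from other users) could be combined into a per-user score. The feature names, weights, and threshold below are all hypothetical illustrations, not the researchers' actual model:

```python
# Illustrative sketch only: a toy score built from the kinds of signals the
# study reports. A real system would learn weights from banned-user data.

def troll_score(posts):
    """Score a user from their first few posts (the study needed 5-10).

    Each post is a dict with hypothetical fields:
      misspell_rate - fraction of misspelled words in the post
      downvotes     - negative reactions received
      upvotes       - positive reactions received
    """
    n = len(posts)
    avg_misspell = sum(p["misspell_rate"] for p in posts) / n
    total_votes = sum(p["downvotes"] + p["upvotes"] for p in posts)
    downvote_ratio = (
        sum(p["downvotes"] for p in posts) / total_votes if total_votes else 0.0
    )
    # Hypothetical weights on the two signals.
    return 0.6 * avg_misspell + 0.4 * downvote_ratio

posts = [
    {"misspell_rate": 0.30, "downvotes": 8, "upvotes": 2},
    {"misspell_rate": 0.40, "downvotes": 9, "upvotes": 1},
]
score = troll_score(posts)
flagged = score > 0.4  # hypothetical ban-review threshold
```

In this toy example the user's high misspelling rate and heavily downvoted posts push the score above the threshold, so the account would be flagged for a moderator to review rather than banned outright.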
The study examined 1.7 million users and 40 million posts from CNN, IGN, and Breitbart over 18 months. The behavior of banned users was compared with that of users in good standing, and you can probably guess which group used more negative language and profanity.
Ultimately, the study concluded that a troll-spotting algorithm could be useful to moderators but shouldn't be relied on exclusively: During testing, the algorithm falsely flagged one well-behaved user for every four trolls it spotted. The researchers also suggested that giving antisocial users a chance to redeem themselves could be more effective than an outright ban.
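That error rate translates directly into a precision figure. Assuming "one well-behaved user for every four trolls" means four true positives per five flags, the arithmetic is:

```python
# Reported error rate: 1 well-behaved user wrongly flagged per 4 trolls
# caught, i.e. 4 true positives out of every 5 flagged accounts.
true_positives = 4
false_positives = 1
precision = true_positives / (true_positives + false_positives)
print(precision)  # 0.8
```

An 80% precision explains the study's caution: at the scale of millions of users, one wrongly flagged account in every five flags is far too many to act on without human review.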