It’s been a long time coming, but mainstream technology and social media companies have recently taken a far more concerted stance against pornography: witness Facebook’s stringent standards against nudity and sexual content, Google’s removal of revenge-porn results from search, and now Twitter’s use of artificial intelligence company Madbits, acquired in 2014, to recognize and flag pornography and make your Twitter feed safer for work.
This is the first major project milestone Madbits has achieved since joining the social media giant in 2014, but it’s a significant one. The system can now identify pornographic content with a high degree of accuracy, and it has real potential to reduce the number of human hours spent poring over and removing graphic content from sites such as Twitter, Facebook, and Google+.
The enterprise started in 2013 as the project of Clément Farabet, a research scientist from New York University. During its year as an independent entity, Madbits “built visual intelligence technology that automatically understands, organizes and extracts relevant information from raw media.” Along with co-founder Louis-Alexandre Etezad-Heydari, Farabet and the Madbits team created their “technology based on deep learning, an approach to statistical machine learning that involves stacking simple projections to form powerful hierarchical models of a signal.”
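Madbits has not published its architecture, but “stacking simple projections to form powerful hierarchical models” describes the basic shape of a feed-forward deep network: each layer applies a linear projection to the previous layer’s output, a nonlinearity is applied between layers, and the stack as a whole builds increasingly abstract features. A minimal pure-Python sketch of that idea (the layer sizes, random weights, and input vector here are arbitrary illustrations, not anything from the actual system):

```python
import random

random.seed(0)

def relu(x):
    """Elementwise nonlinearity applied between projections."""
    return [max(0.0, v) for v in x]

def project(x, weights):
    """One 'simple projection': a linear map of the input vector."""
    return [sum(w * v for w, v in zip(row, x)) for row in weights]

def random_layer(n_in, n_out):
    """Random weight matrix standing in for learned parameters."""
    return [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]

def deep_model(x, layers):
    """Stack projections with nonlinearities: a hierarchical model."""
    for weights in layers[:-1]:
        x = relu(project(x, weights))
    return project(x, layers[-1])  # final projection: raw class scores

# A toy three-layer stack over a 4-dimensional "image feature" vector,
# producing two scores (e.g. safe vs. not-safe-for-work).
layers = [random_layer(4, 8), random_layer(8, 8), random_layer(8, 2)]
scores = deep_model([0.2, 0.9, 0.1, 0.5], layers)
```

In a real system the weights are learned from labeled examples rather than drawn at random, and the input is the image itself rather than a hand-made feature vector; the point here is only the shape of the computation the quote describes.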
Naturally, this highly sophisticated software caught the attention of some pretty heavy hitters, and while no one else seemed entirely sure how to apply this revolutionary new tech, Twitter immediately put the Madbits team to the test, asking them to identify the inappropriate and pornographic images that often appear on the platform. As Alex Roetter, Twitter’s SVP of Engineering, told Wired, “When you do an acquisition — even though they’re coming in to do something broad — you want to give them something specific, so you get to know each other and make sure the acquisition works. So we gave them the problem of NSFW.”
And now, that acquisition has certainly paid off. As Wired reported last year, hundreds of thousands of overseas laborers do the dirty work that allows our social media platforms to function at the relatively PG-13 level we’re accustomed to. They spend hours ensuring that graphic content (sexual, violent, or otherwise) does not impinge on our sheltered view of the world, potentially at the cost of their own mental well-being (consider having to look at Justin Bieber’s recently Instagrammed posterior as a condition of your employment).
But with the help of Madbits’s technology, which can identify NSFW images with an error rate of roughly 7 percent, these workers may be relieved of some of those grim duties.
The entire enterprise sheds new light on the possibilities of machine learning and on how AI may enable further technological advances. As Roetter notes, “You need humans, generally, to label the data. But then, going forward, the model is applied to cases you’ve never seen before, so you dramatically cut down the need for people. And it’s lower latency, of course, because the model can do it in real-time.”
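The workflow Roetter describes is the standard supervised-learning loop: humans label a training set once, a model is fit to those labels, and the model then classifies examples no human has reviewed. A deliberately tiny stand-in for that loop (the feature vectors, labels, and the centroid-based “model” below are illustrative inventions, not Twitter’s actual pipeline):

```python
def centroid(vectors):
    """Average of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def train(labeled):
    """Humans supply (features, label) pairs; 'training' here just
    computes one centroid per label, a toy stand-in for fitting a
    real model."""
    by_label = {}
    for features, label in labeled:
        by_label.setdefault(label, []).append(features)
    return {label: centroid(vs) for label, vs in by_label.items()}

def classify(model, features):
    """Apply the trained model to an example no human has reviewed."""
    return min(model, key=lambda label: distance(model[label], features))

# Hand-labeled training data (hypothetical two-number image features).
labeled = [
    ([0.9, 0.8], "nsfw"), ([0.8, 0.9], "nsfw"),
    ([0.1, 0.2], "safe"), ([0.2, 0.1], "safe"),
]
model = train(labeled)
verdict = classify(model, [0.85, 0.75])  # unseen image -> "nsfw"
```

The economics Roetter points to fall out of the structure: labeling happens once, up front, while `classify` runs on every new upload with no human in the loop and at machine speed.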
And as machines get better at flagging this content, the need for human interaction with such drivel will hopefully continue to decline.