“Content moderation” might sound like a commonplace academic or editorial task, but the reality is far darker. According to Wired, more than 100,000 people worldwide, many in the Philippines but also in the U.S., spend each workday scanning online content for obscene, hateful, threatening, abusive, or otherwise disgusting material. The toll this work takes on moderators has been likened to post-traumatic stress disorder, leaving emotional and psychological scars that can last a lifetime if untreated. That’s a grim reality seldom publicized in the rush to have everyone engaged online at all times.
Twitter, Facebook, Google, Netflix, YouTube, and other companies may not publicize their content moderation problems, but that doesn’t mean they aren’t working hard to reduce the harm done to the people who screen visual and written content. The objective isn’t only to control the information that shows up on their sites, but also to build AI systems capable of taking over the most tawdry, harmful aspects of the work. According to TechCrunch, Facebook engineers recently stated that the company’s AI systems now flag more offensive content than humans do.
Joaquin Candela, Facebook’s director of engineering for applied machine learning, spoke about the growing application of AI to many aspects of the social media giant’s business, including content moderation. “One thing that is interesting is that today we have more offensive photos being reported by AI algorithms than by people,” Candela said. “The higher we push that to 100 percent, the fewer offensive photos have actually been seen by a human.”
The systems aren’t perfect, and mistakes happen. In a 2015 Wired report, Twitter said that when its AI system was tuned to recognize porn 99 percent of the time, 7 percent of the images it blocked were innocent and filtered incorrectly, as when photos of half-naked babies or nursing mothers were flagged as offensive. The systems will have to learn, and the answer will be deep learning, in which the software improves by analyzing massive amounts of data.
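The trade-off Twitter described is essentially a threshold-tuning problem: the more aggressively a classifier blocks, the more offensive images it catches, but the more innocent ones get swept up with them. The sketch below, which uses purely synthetic scores rather than any company’s actual model or data, illustrates how that trade-off can be measured.

```python
import numpy as np

rng = np.random.default_rng(0)

# Purely synthetic "classifier scores": higher means the model is more
# confident an image is offensive. Illustrative only, not real data.
n_offensive, n_innocent = 1_000, 9_000
offensive_scores = rng.normal(loc=0.75, scale=0.15, size=n_offensive)
innocent_scores = rng.normal(loc=0.30, scale=0.15, size=n_innocent)

scores = np.concatenate([offensive_scores, innocent_scores])
labels = np.concatenate([np.ones(n_offensive), np.zeros(n_innocent)])

def block_stats(threshold):
    """Block every image scoring at or above `threshold`; report how many
    offensive images are caught and what share of blocked images are innocent."""
    blocked = scores >= threshold
    catch_rate = blocked[labels == 1].mean()            # offensive images caught
    innocent_share = (labels[blocked] == 0).mean() if blocked.any() else 0.0
    return catch_rate, innocent_share

for t in (0.40, 0.50, 0.60, 0.70):
    catch, innocent = block_stats(t)
    print(f"threshold {t:.2f}: catches {catch:.0%} of offensive images, "
          f"{innocent:.1%} of blocked images are innocent")
```

Lowering the threshold catches more offensive material but wrongly blocks more innocent images, which is exactly the 99-percent/7-percent tension Twitter reported.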
Another Facebook engineer spoke of how Facebook and other companies are sharing what they learn from using AI to cut offensive content. “We share our research openly,” Hussein Mehanna, Facebook’s director of core machine learning, told TechCrunch. “We don’t see AI as our secret weapon just to compete with other companies.”