Facebook increasingly using AI to scan for offensive content

“Content moderation” might sound like a commonplace academic or editorial task, but the reality is far darker. Internationally, there are more than 100,000 people, many in the Philippines but also in the U.S., whose job each day is to scan online content to screen for obscene, hateful, threatening, abusive, or otherwise disgusting content, according to Wired. The toll this work takes on moderators can be likened to post-traumatic stress disorder, leaving emotional and psychological scars that can remain for life if untreated. That’s a grim reality seldom publicized in the rush to have everyone engaged online at all times.

Twitter, Facebook, Google, Netflix, YouTube, and other companies may not publicize content moderation issues, but that doesn’t mean they aren’t working hard to reduce the harm done to the people who screen visual and written content. The objective isn’t only to control the information that shows up on their sites, but also to build AI systems capable of taking over the most tawdry, harmful aspects of the work. Engineers at Facebook recently stated that their AI systems now flag more offensive content than humans do, according to TechCrunch.

Joaquin Candela, Facebook’s director of engineering for applied machine learning, spoke about the growing application of AI to many aspects of the social media giant’s business, including content moderation. “One thing that is interesting is that today we have more offensive photos being reported by AI algorithms than by people,” Candela said. “The higher we push that to 100 percent, the fewer offensive photos have actually been seen by a human.”

The systems aren’t perfect, and mistakes happen. In a 2015 report in Wired, Twitter said that when its AI system was tuned to catch porn 99 percent of the time, 7 percent of the images it blocked were innocent and filtered incorrectly, such as photos of half-naked babies or nursing mothers flagged as offensive. The systems will have to keep learning, and the answer will be deep learning, in which computers refine how they analyze massive amounts of data.
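To make that trade-off concrete, here is a minimal sketch of a threshold-based moderation filter. It is purely illustrative: the classifier, function names, and threshold value are assumptions, not Facebook’s or Twitter’s actual systems, and the stand-in scoring function would be a trained deep-learning model in practice. Raising the threshold means fewer innocent images are blocked by mistake, but more offensive ones slip through to human reviewers; lowering it does the reverse, which is the 99-percent-versus-7-percent tension described above.

```python
# Hypothetical sketch of a threshold-based content moderation filter.
# The scoring function below is a placeholder; a real system would use a
# deep neural network trained on large amounts of labeled image data.
import random


def classify_offensiveness(image: str) -> float:
    """Stand-in for a trained model: returns a confidence score in [0, 1]
    that the image contains offensive content."""
    return random.random()  # placeholder score for illustration only


def moderate(images, threshold=0.9):
    """Auto-flag images scoring at or above the threshold; route the rest
    to human review.

    A lower threshold catches more truly offensive images (higher recall)
    but also blocks more innocent ones (more false positives).
    """
    auto_flagged, needs_human_review = [], []
    for image in images:
        score = classify_offensiveness(image)
        if score >= threshold:
            auto_flagged.append(image)        # never shown to a moderator
        else:
            needs_human_review.append(image)  # falls back to human review
    return auto_flagged, needs_human_review


if __name__ == "__main__":
    flagged, review = moderate([f"photo_{i}.jpg" for i in range(100)])
    print(f"{len(flagged)} auto-flagged, {len(review)} sent to human review")
```

In a production pipeline the goal Candela describes, pushing auto-flagging “to 100 percent,” amounts to raising the share of images resolved above the threshold so that fewer of them ever reach a human reviewer.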

Another Facebook engineer spoke of how Facebook and other companies are sharing what they’re learning with AI to cut offensive content. “We share our research openly,” Hussein Mehanna, Facebook’s director of core machine learning, told TechCrunch. “We don’t see AI as our secret weapon just to compete with other companies.”
