
Facebook increasingly using AI to scan for offensive content

“Content moderation” might sound like a commonplace academic or editorial task, but the reality is far darker. Internationally, more than 100,000 people, many in the Philippines but also in the U.S., spend each workday screening online content for obscene, hateful, threatening, abusive, or otherwise disgusting material, according to Wired. The toll this work takes on moderators has been likened to post-traumatic stress disorder, leaving emotional and psychological scars that can last a lifetime if untreated. That’s a grim reality seldom publicized in the rush to keep everyone engaged online at all times.

Twitter, Facebook, Google, Netflix, YouTube, and other companies may not publicize their content moderation problems, but that doesn’t mean they aren’t working hard to limit the harm done to the people who screen visual and written content. The goal isn’t only to control what appears on their sites, but also to build AI systems capable of taking over the most tawdry, harmful parts of the work. Facebook engineers recently stated that the company’s AI systems now flag more offensive content than humans do, according to TechCrunch.


Joaquin Candela, Facebook’s director of engineering for applied machine learning, spoke about the growing application of AI to many aspects of the social media giant’s business, including content moderation. “One thing that is interesting is that today we have more offensive photos being reported by AI algorithms than by people,” Candela said. “The higher we push that to 100 percent, the fewer offensive photos have actually been seen by a human.”


The systems aren’t perfect, and mistakes happen. In a 2015 Wired report, Twitter said that when its AI system was tuned to catch porn 99 percent of the time, 7 percent of the images it blocked were innocent and filtered incorrectly, such as photos of half-naked babies or nursing mothers flagged as offensive. The systems will have to keep learning, and the answer will be deep learning, which uses computers to analyze how they, in turn, analyze massive amounts of data.
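The numbers Wired cites describe a familiar classification tradeoff: set the threshold low enough to catch nearly everything offensive, and more innocent images get swept up along the way. Below is a minimal, hypothetical Python sketch of that tradeoff; the scores, distributions, and threshold values are invented for illustration and have no connection to Facebook’s or Twitter’s actual systems.

```python
# Hypothetical illustration only: how lowering a moderation classifier's
# threshold raises recall but also raises the share of innocent images blocked.
import random

random.seed(0)

# Simulated classifier scores: offensive images tend to score high, innocent
# images lower, with some overlap (the overlap is what causes false positives).
offensive = [random.gauss(0.8, 0.10) for _ in range(1_000)]
innocent = [random.gauss(0.4, 0.15) for _ in range(20_000)]

def blocked_stats(threshold):
    """Return (recall, share of blocked images that were actually innocent)."""
    caught = sum(score >= threshold for score in offensive)
    wrongly_blocked = sum(score >= threshold for score in innocent)
    recall = caught / len(offensive)
    blocked = caught + wrongly_blocked
    innocent_share = wrongly_blocked / blocked if blocked else 0.0
    return recall, innocent_share

for threshold in (0.70, 0.60, 0.50):
    recall, innocent_share = blocked_stats(threshold)
    print(f"threshold={threshold:.2f}  recall={recall:.0%}  "
          f"innocent share of blocked={innocent_share:.0%}")
```

Running the sketch shows recall climbing toward 99 percent as the threshold drops, while the fraction of blocked images that were actually harmless climbs with it, which is the kind of error Twitter described.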

Another Facebook engineer described how Facebook and other companies are sharing what they learn about using AI to cut offensive content. “We share our research openly,” Hussein Mehanna, Facebook’s director of core machine learning, told TechCrunch. “We don’t see AI as our secret weapon just to compete with other companies.”
