
Facebook increasingly using AI to scan for offensive content

“Content moderation” might sound like a commonplace academic or editorial task, but the reality is far darker. Internationally, more than 100,000 people, many in the Philippines but also in the U.S., spend their workdays screening online content for obscene, hateful, threatening, abusive, or otherwise disturbing material, according to Wired. The toll this work takes on moderators has been likened to post-traumatic stress disorder, leaving emotional and psychological scars that can last a lifetime if untreated. That’s a grim reality seldom publicized in the rush to have everyone engaged online at all times.

Twitter, Facebook, Google, Netflix, YouTube, and other companies may not publicize their content moderation problems, but that doesn’t mean they aren’t working hard to reduce the harm done to the people who screen visual and written content. The objective isn’t only to control what appears on their sites, but also to build AI systems capable of taking over the most sordid, harmful parts of the work. Facebook engineers recently stated that AI systems now flag more offensive content than humans do, according to TechCrunch.


Joaquin Candela, Facebook’s director of engineering for applied machine learning, spoke about the growing application of AI to many aspects of the social media giant’s business, including content moderation. “One thing that is interesting is that today we have more offensive photos being reported by AI algorithms than by people,” Candela said. “The higher we push that to 100 percent, the fewer offensive photos have actually been seen by a human.”


The systems aren’t perfect, and mistakes happen. In a 2015 Wired report, Twitter said that when its AI system was tuned to catch porn 99 percent of the time, 7 percent of the images it blocked were innocent and filtered incorrectly, such as photos of half-naked babies or nursing mothers flagged as offensive. The systems will have to learn, and the answer will be deep learning, with models improving as they analyze massive amounts of data.
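To see why that trade-off is hard, here is a minimal, hypothetical Python sketch (synthetic data, illustrative only, not Twitter’s or Facebook’s actual system): it tunes a classifier’s score threshold until a target share of explicit images is caught, then reports how many innocent images get swept up at that setting.

```python
import numpy as np

# Synthetic illustration of threshold tuning for a content classifier.
# "scores" are the model's confidence that an image is explicit;
# "labels" are ground truth (1 = explicit, 0 = innocent).
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=10_000)
# Explicit images tend to score higher, but the two distributions overlap,
# which is what forces the recall vs. false-positive trade-off.
scores = np.where(labels == 1,
                  rng.normal(0.7, 0.15, size=labels.size),
                  rng.normal(0.3, 0.15, size=labels.size))

def rates_at_recall(scores, labels, target_recall):
    """Pick the threshold that catches `target_recall` of explicit images,
    then report how many innocent images are flagged along with them."""
    pos_scores = np.sort(scores[labels == 1])
    # Lowest score we can accept while still catching the target share.
    threshold = pos_scores[int((1 - target_recall) * len(pos_scores))]
    flagged = scores >= threshold
    false_positive_rate = flagged[labels == 0].mean()
    return threshold, false_positive_rate

for recall in (0.90, 0.95, 0.99):
    t, fpr = rates_at_recall(scores, labels, recall)
    print(f"recall={recall:.2f}  threshold={t:.2f}  innocent flagged={fpr:.1%}")
```

On this toy data, pushing the catch rate from 90 toward 99 percent steadily raises the share of innocent images that get blocked, which mirrors the figures Twitter described.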

Another Facebook engineer described how Facebook and other companies are sharing what they learn from using AI to cut offensive content. “We share our research openly,” Hussein Mehanna, Facebook’s director of core machine learning, told TechCrunch. “We don’t see AI as our secret weapon just to compete with other companies.”

Bruce Brown, Contributing Editor