
Facebook says white supremacists ‘cannot have a presence’ on the social network

Facebook removed more than 200 white supremacist organizations from its platform for violating its community standards on both terrorism and hate speech, a Facebook representative told Digital Trends, part of a broader crackdown on harmful content.

“If you are a member or a leader of one of these groups, you cannot have a presence on Facebook,” Sarah Pollack, a Facebook company spokesperson, said Wednesday. The classifications of “terrorism” and “hate speech” are based on behavior, she said.

“Based on their behavior, some of the white supremacist organizations we’ve banned under our Dangerous Individuals and Organizations policy were banned as terrorist organizations, the others were banned as organized hate groups.”

Previously, Facebook's enforcement concentrated on ISIS, Al-Qaeda, and their affiliates; the company has now expanded its definition of terrorist organizations, so some white supremacist groups are included.

In addition, Pollack said, “other people cannot post content supporting, praising, or representing [those groups]. This is for groups, individuals, as well as attacks that are claimed by these groups.” Pollack also said that mass shooters would fall under this category.

In total, Facebook says it removed more than 22.8 million pieces of content from Facebook and around 4.4 million posts from Instagram in the second and third quarters of 2019 for violating its community standards, according to its fourth-ever Community Standards Enforcement Report, released Wednesday.

Facebook also said it was able to “proactively detect” more than 96% of the offensive content it took down from Facebook and more than 77% of the content it took down from Instagram.

This marked the first time Instagram was included in the Community Standards Enforcement Report, as well as the first time Facebook included data on suicide and self-injury.

“We work with experts to ensure everyone’s safety is considered,” Guy Rosen, the vice president of Facebook Integrity, said in a statement. “We remove content that depicts or encourages suicide or self-injury, including certain graphic imagery and real-time depictions that experts tell us might lead others to engage in similar behavior.”

Rosen also wrote that Facebook had “made improvements to our technology to find and remove more violating content,” including expanding its data on terrorist propaganda. Facebook said it had identified a wide range of groups as terrorist organizations and had proactively removed 99% of the content associated with Al-Qaeda, ISIS, and their affiliates.

In addition, the company said it had expanded its efforts to proactively detect and remove content from “all terrorist organizations” beyond just Al-Qaeda and ISIS, catching 98.5% of such content on Facebook and 92.2% of similar posts on Instagram.

The platform also said it removed 11.6 million posts from Facebook in Q3 of 2019 that dealt with child nudity and exploitation, 99% of which were detected proactively, a significant increase over the 5.8 million posts removed in Q1 of 2019. On Instagram, it removed around 1.3 million such posts.

On the illicit sale of drugs and firearms, Facebook also said its takedown rate had greatly improved from Q1 to Q3, with a total of 5.9 million posts removed from Facebook and around 2.4 million from Instagram.
