
New Zealand attack shows that as A.I. filters get smarter, so do violators

Facebook, YouTube, and Twitter are quick to share stats on how their artificial intelligence filters are improving, but the aftermath of last week’s shooting in Christchurch, New Zealand, made gaps in the system terrifyingly obvious. Hundreds of thousands of copies of video shot from the shooter’s point of view were uploaded to social media after a copy of the original was posted to the online message board 8chan.

The attack on two mosques in Christchurch left 50 dead and another 50 wounded, according to authorities. The 28-year-old shooter wore a helmet-mounted camera and livestreamed the shootings in a way that some have described as “designed for maximum spread on social media.” YouTube’s chief product officer, Neal Mohan, says footage of the shooting was uploaded faster, and in greater volume, than in previous incidents.

Three days later, social media platforms were still struggling to keep copies of the 17-minute video off their networks. YouTube erred on the side of caution and temporarily disabled the human review step that normally catches videos wrongly flagged by the platform’s A.I. system as a violation of its terms. The change remains in effect, and YouTubers who believe their videos were miscategorized are encouraged to file for reinstatement. Some search functions also remain disabled.

While YouTube didn’t share exact upload numbers, Facebook says it removed 1.5 million copies of the video. About 80 percent of those, 1.2 million, were blocked at upload before ever making it onto the platform, while the remaining 300,000 were removed within the first 24 hours after posting. The live broadcast saw fewer than 200 views, the company said, and the video uploaded by the shooter was viewed around 4,000 times in total before being removed. No users reported the video until 29 minutes after the shooter started livestreaming. Facebook deleted the suspected shooter’s accounts on both Facebook and Instagram.

Some of social media’s past efforts to keep violence and hate off the platforms worked, like the 1.2 million videos that never made it onto Facebook. YouTube’s earlier mistake of letting violent videos surface in search suggestions didn’t repeat itself, and users searching for the attack were redirected toward news coverage instead.

But as the hundreds of thousands of re-uploads show, the A.I. designed to recognize offending videos isn’t foolproof. Many networks use a technique called hashing to prevent mass re-uploads by recognizing when the same video is uploaded more than once. But according to The Washington Post, some users were able to bypass the hashing by shortening the video, adding logos, or even applying an effect that made the real-life event look like a video game. While the original video was removed, the networks struggled with “remixes” of the livestream.
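The reason small edits can defeat these systems comes down to how the hash is computed. A cryptographic hash changes completely if a single pixel changes, so platforms instead lean on perceptual hashes, which change only slightly when the video changes slightly. As a rough illustration only, here is a minimal sketch of a frame-level difference hash (dHash) in Python using the Pillow library; the file name and threshold are illustrative assumptions, and real platform systems operate on whole videos (and, as noted below, audio), not single frames.

```python
# A minimal sketch of perceptual "difference hashing" (dHash) on one video
# frame. Visually similar frames produce hashes that differ in only a few
# bits, so a small edit (a logo, a crop) can still be matched, whereas a
# cryptographic hash of the file would change completely.
from PIL import Image


def dhash(image_path: str, hash_size: int = 8) -> int:
    """Compute a 64-bit difference hash of an image frame."""
    # Shrink to a (hash_size+1) x hash_size grayscale thumbnail so only
    # coarse structure survives; fine detail (and most edits) is discarded.
    img = Image.open(image_path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())

    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            # One bit per adjacent-pixel comparison: is the gradient rising?
            bits = (bits << 1) | (1 if left > right else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Count differing bits; a small distance means visually similar frames."""
    return bin(a ^ b).count("1")


# A re-upload with a logo overlay would typically land within a few bits of
# the original, so a threshold check flags the match (names are hypothetical):
# if hamming_distance(dhash("upload_frame.png"), known_hash) <= 10: block()
```

The evasion tactics the Post describes target exactly this tolerance: trimming the video shifts which frames get hashed, and a heavy visual filter changes the coarse structure enough to push the distance past any reasonable threshold.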

Facebook expanded its hashing technology to try to catch more variations of the video, adding audio hashing to the process. Networks that are part of the Global Internet Forum to Counter Terrorism, which includes Facebook, YouTube, Twitter, and Microsoft, added variations of the video to a shared database, allowing other networks to block the same uploads. Facebook says the group collectively added around 800 variations of the video to the database.
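To make the shared-database idea concrete, here is a hedged sketch of what an upload-time check against pooled hashes could look like. Everything in it (the hash values, the MAX_DISTANCE threshold, the function names) is an assumption for illustration; the consortium’s actual interface is not public.

```python
# A sketch of consulting a shared hash database at upload time. The values
# below are hypothetical 64-bit perceptual hashes, not real GIFCT entries.
KNOWN_VIOLATION_HASHES = {
    0x8F3A61D24B90C7E5,  # hypothetical hash of the original video
    0x8F3A61D24B90C6E5,  # hypothetical hash of a shared variant
}

MAX_DISTANCE = 10  # bits of tolerance for edits such as logos or re-encoding


def hamming_distance(a: int, b: int) -> int:
    """Count differing bits between two 64-bit hashes."""
    return bin(a ^ b).count("1")


def should_block(upload_hash: int) -> bool:
    """Block the upload if it sits within MAX_DISTANCE bits of a known hash."""
    return any(
        hamming_distance(upload_hash, known) <= MAX_DISTANCE
        for known in KNOWN_VIOLATION_HASHES
    )


# One bit away from a known variant, so the check returns True:
print(should_block(0x8F3A61D24B90C7E4))
```

Exact-match lookups scale easily with a set, but tolerant matching like this needs either a linear scan or a specialized index (BK-trees, multi-index hashing), which is one reason matching hundreds of thousands of uploads against 800 variants in real time is harder than it sounds.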

“This was a tragedy that was almost designed for the purpose of going viral. We’ve made progress, but that doesn’t mean we don’t have a lot of work ahead of us, and this incident has shown that, especially in the case of more viral videos like this one, there’s more work to be done,” Mohan told The Washington Post.

Facebook didn’t say why the remaining 20 percent of uploads weren’t caught before going live on the network. “We continue to work around the clock to remove violating content using a combination of technology and people,” Facebook New Zealand representative Mia Garlick said in a tweet. “Out of respect for the people affected by this tragedy and the concerns of local authorities, we’re also removing all edited versions of the video that do not show graphic content.”

Reddit and Twitter also removed related content from their platforms, but didn’t share related statistics. “We are continuously monitoring and removing any content that depicts the tragedy, and will continue to do so in line with the Twitter rules,” Twitter Safety tweeted. “We are also in close coordination with New Zealand law enforcement to help in their investigation.”

Updated on March 19 with additional details from Facebook.
