
Facebook’s revenge porn prevention tool now includes human review

After testing a tool to fight revenge porn in Australia last year, Facebook is expanding the pilot program and adding human reviewers. On Tuesday, May 22, Facebook announced an expanded test of the tool, created in conjunction with a number of organizations, which allows users to upload their own sensitive photos privately to prevent someone else from uploading the images publicly. Most notably, a “specially trained” team member now reviews each report, where Facebook said the initial test relied on artificial intelligence. While the initial test covered Australia and three unannounced countries, Facebook said the tool is now also being tested in the U.S., Canada, and the U.K. in addition to Australia.

The social media giant is suggesting that those who consider themselves vulnerable to such tactics preemptively upload their images to the social network. While it might seem counterintuitive, this approach lets Facebook, and by extension the uploader, get ahead of the problem by creating a hash of the image. A hash, Facebook explains, is like a fingerprint of the image that allows software to keep other copies off the platform without permanently keeping a copy of the image on Facebook's servers.
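Facebook has not published the hashing algorithm behind the tool, but the general idea can be illustrated with an open-source perceptual hash. The sketch below uses the Python imagehash library as a stand-in, not Facebook's implementation, to show how an image can be reduced to a short fingerprint and how later uploads can be checked against it without retaining the image itself.

```python
# Illustrative sketch only: Facebook has not disclosed its hashing
# method. This uses the open-source imagehash library
# (pip install imagehash pillow) to show the general idea.
import imagehash
from PIL import Image

def fingerprint(path: str) -> imagehash.ImageHash:
    """Reduce an image to a 64-bit perceptual hash (its "fingerprint")."""
    return imagehash.phash(Image.open(path))

def is_blocked(candidate: imagehash.ImageHash,
               blocked: list[imagehash.ImageHash],
               max_distance: int = 8) -> bool:
    """Flag an upload whose hash is within a small Hamming distance
    of any fingerprint on the blocklist (catches near-duplicates)."""
    return any(candidate - other <= max_distance for other in blocked)

# The platform keeps only the hashes; the reported image itself
# never needs to be stored long-term. File names are hypothetical.
blocklist = [fingerprint("reported_image.jpg")]
print(is_blocked(fingerprint("new_upload.jpg"), blocklist))
```

Unlike a cryptographic hash, a perceptual hash changes only slightly when an image is resized or recompressed, which is what would let a platform catch re-uploads that are not byte-for-byte identical.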

Revenge pornography has been a growing problem for years, especially on Facebook, whose existing tools can remove an image only after it has been shared. Facebook is looking to do something much more proactive to prevent the practice, and hopes that people who could be affected will trust its hash-based system to combat it.

The theory behind the technique is that someone who knows compromising images of them are in the hands of a person who might upload them can block those images preemptively. By uploading the image to Facebook privately, the user lets the social network “hash” the media, effectively marking duplicates of that image for immediate takedown should someone else attempt to upload them. Developed in conjunction with a number of nonprofits focused on women's rights and domestic abuse, the service applies across all Facebook platforms, including Facebook itself, its Messenger application, and Instagram.

Users start by contacting the partner organization in their country to receive a form. In the U.S., those organizations are the Cyber Civil Rights Initiative and the National Network to End Domestic Violence. Other partners include Australia's eSafety Commissioner, the U.K. Revenge Porn Helpline, and YWCA Canada.

After someone fearing potential revenge porn submits the form, Facebook sends that user a secure upload link for the image or images. Facebook then creates the hash and deletes the actual image from its servers within seven days.
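Facebook has not described its internal pipeline, but the reported flow (secure upload, hashing, deletion within seven days) could be sketched roughly as follows. Every name here is hypothetical; only the seven-day retention policy comes from Facebook's description.

```python
# Hypothetical sketch of the reported workflow; none of these names
# come from Facebook. The fingerprint is retained, while the source
# image is scheduled for deletion within seven days.
import datetime

RETENTION = datetime.timedelta(days=7)

blocked_hashes: set[str] = set()                     # kept long-term
pending_deletion: dict[str, datetime.datetime] = {}  # image -> deadline

def handle_secure_upload(image_path: str, image_hash: str) -> None:
    """Record the fingerprint and schedule the image for deletion."""
    blocked_hashes.add(image_hash)       # only the hash is retained
    pending_deletion[image_path] = (
        datetime.datetime.now(datetime.timezone.utc) + RETENTION
    )

def purge_expired(now: datetime.datetime) -> None:
    """Remove images whose seven-day retention window has passed."""
    for path, deadline in list(pending_deletion.items()):
        if now >= deadline:
            del pending_deletion[path]   # the image goes; the hash stays
```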

While Facebook said that no human viewed the photos in the initial tests and that AI created the hash, the company now says a trained staff member will review each report. “One of a handful of specifically trained members of our Community Operations Safety Team will review the report and create a unique fingerprint, or hash, that allows us to identify future uploads of the images without keeping copies of them on our servers,” Facebook’s head of global safety, Antigone Davis, wrote in a post.

The question is whether, in the wake of the Cambridge Analytica scandal and a bug that saved unpublished videos, users trust Facebook enough to upload sensitive images. The tool is still in its testing stages; if the scheme proves successful, it may be rolled out worldwide.

Updated on May 23: Added information regarding the expanded test.
