Social media sites like Facebook and Twitter are constantly testing, experimenting with, and patenting new features. From the strange and mundane to the downright infuriating, the tools being tweaked don’t always seem to have any practical use for the general public.
Facebook’s latest test, however, may actually result in a genuinely useful support function. The social network is reportedly testing a feature that will automatically alert you if it discovers another user is impersonating your account by using your name and photo, reports Mashable.
Here’s how it works: when Facebook detects that another profile matches yours, it will send you a notification about the suspected imposter. You’ll then be prompted to confirm whether the flagged profile is in fact impersonating you by using your personal information, or whether it simply belongs to another user with a similar name and photo.
Part automation, part human support system, the function will see a dedicated support team manually review any flagged impersonations. According to Facebook, the feature has been in testing since November and is now live in 75 percent of the world, with further expansion on the cards.
The function makes complete sense as part of Facebook’s increased support efforts for its diverse global community. Most recently, the platform introduced a new “fake name” reporting system aimed at addressing the concerns of LGBTQ users and people with non-Western names, as well as cases of stalking and abuse. This latest test feature likely falls into the latter camp, as the social network’s real-name policy requires people to use an authentic name.
Facebook is also testing two other features: new ways of reporting non-consensual intimate images and a photo checkup function. The latter walks users through the privacy settings for their images, and is reportedly already live in India and parts of South America, Africa, and Southeast Asia.
The term “non-consensual intimate images,” as Facebook has so formally put it, basically refers to nude photos posted online without your consent. These types of images have been banned on Facebook since 2012, but the new test feature is aimed at making the reporting experience more sympathetic for victims of abuse. Under the test, when a person reports a non-consensual intimate image, they will also have the option of telling Facebook whether the image is of them. If it is, the platform will point the victim to external support groups for abuse victims, as well as to possible legal options.
We have reached out to Facebook for a statement, and will update this article accordingly.