Facebook used a screenshot of a death threat to promote Instagram

Panithan Fakseemuang / 123RF
Score another point for human oversight — Facebook algorithms are once again being criticized for inadvertently promoting an inappropriate post. In an auto-generated Facebook post for Instagram, the algorithm promoted a screenshot of an email containing a death threat as part of the effort to encourage users to try out Instagram.

Instagram “notification posts,” as the company calls them, on Facebook are pretty common. An algorithm chooses a photo based on the number of interactions, then promotes the image on Facebook along with a list of all that user’s friends who are also on Instagram to help encourage Facebook users to join both platforms. When that same algorithm chose one of reporter Olivia Solon’s most popular posts, it was this one (be warned, the post contains graphic language):

Instagram is using one of my most "engaging" posts to advertise its service to others on Facebook

— Olivia Solon (@oliviasolon) September 21, 2017

Solon, a reporter for the Guardian, took the screenshot to demonstrate the type of hate mail she receives because of her job. The post drew several comments (though only a few likes), which was apparently enough for the algorithm to decide it was a good image to use to promote Instagram to Solon’s Facebook friends.

Instagram apologized to Solon and said the post was not a paid promotion, but rather a notification post “designed to encourage engagement on Instagram.” According to Instagram, these types of notification posts are only viewed by a small percentage of a Facebook user’s friends.

The incident follows Facebook’s public apology after an organization discovered that promoted posts could be targeted to demographics like “Jew haters.” Facebook temporarily disabled the feature until the issue was corrected and said the company would be adding more human oversight to the process.

Like the automatically generated Instagram notification post, the demographic issue was the result of a lack of human oversight: Users could type whatever they wanted into the bio sections. More than 2,000 people typed “Jew hater” into one of the education fields, which meant the demographic was available for advertisers to choose from when creating a targeted post.

Facebook isn’t oblivious to the problem. Earlier in 2017, the company launched a series of blog posts called “Hard Questions” that discusses how the social media platform handles topics such as hate speech and discrimination and what the company is doing to improve. Along with improving current algorithms, the company said at the time that it would add 3,000 employees before the end of the year to help catch what the computers miss.
