Social (Net)Work: What can A.I. catch — and where does it fail miserably?

Criticism for hate speech, extremism, fake news, and other content that violates community standards has the largest social media networks strengthening policies, adding staff, and re-working algorithms. In the Social (Net)Work Series, we explore social media moderation, looking at what works and what doesn’t, while examining possibilities for improvement.

From a video of a suicide victim on YouTube to ads targeting “Jew haters” on Facebook, social media platforms are plagued by inappropriate content that manages to slip through the cracks. In many cases, the platform’s response is to implement smarter algorithms to better identify inappropriate content. But what is artificial intelligence really capable of catching, how much should we trust it, and where does it fail miserably?

“A.I. can pick up offensive language and it can recognize images very well. The power of identifying the image is there,” says Winston Binch, the chief digital officer of Deutsch, a creative agency that uses A.I. in creating digital campaigns for brands from Target to Taco Bell. “The gray area becomes the intent.”

A.I. can read both text and images, but accuracy varies

Using natural language processing, A.I. can be trained to recognize text across multiple languages. A program designed to spot posts that violate community guidelines, for example, can be taught to detect racial slurs or terms associated with extremist propaganda.
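At its simplest, that kind of detection starts with matching posts against a list of flagged terms. The sketch below (the term list and helper name are invented for illustration, not from any platform) shows the basic idea, and also hints at why a pure keyword match is only a first pass:

```python
import re

# Hypothetical list of flagged terms; real systems use per-language
# models and far larger, curated lexicons rather than a hand-written set.
FLAGGED_TERMS = {"slur_a", "slur_b", "extremist_phrase"}

def flag_post(text: str) -> bool:
    """Return True if the post contains any flagged term (whole-word match)."""
    words = set(re.findall(r"[\w']+", text.lower()))
    return not FLAGGED_TERMS.isdisjoint(words)

print(flag_post("totally harmless post"))       # False
print(flag_post("this contains slur_a sadly"))  # True
```

A filter like this catches exact terms, but it knows nothing about intent or context, which is exactly the gap the rest of this article describes.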


A.I. can also be trained to recognize images, whether to block some forms of nudity or to spot symbols like the swastika. It works well in many cases, but it isn’t foolproof. For example, Google Photos was criticized for tagging images of dark-skinned people with the keyword “gorilla.” Years later, Google still hasn’t found a solution for the problem, instead choosing to remove the program’s ability to tag monkeys and gorillas entirely.

Algorithms also need to be updated as a word’s meaning evolves, or to understand how a word is used in context. For example, LGBT Twitter users recently noticed a lack of search results for #gay and #bisexual, among other terms, leading some to feel the service was censoring them. Twitter apologized for the error, blaming it on an outdated algorithm that was falsely identifying posts tagged with the terms as potentially offensive. Twitter said its algorithm was supposed to consider the term in the context of the post, but had failed to do so with those keywords.

A.I. is biased

The gorilla tagging fail brings up another important shortcoming — A.I. is biased. You might wonder how a computer could possibly be biased, but A.I. is trained by watching people complete tasks, or by inputting the results of those tasks. For example, programs to identify objects in a photograph are often trained by feeding the system thousands of images that were initially tagged by hand.

The human element is what makes it possible for A.I. to complete tasks previously impossible on typical software, but that same human element also inadvertently gives human bias to a computer. An A.I. program is only as good as the training data — if the system was largely fed images of white males, for example, the program will have difficulty identifying people with other skin tones.
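A toy model makes the skewed-training-data problem concrete. In the sketch below, each sample is reduced to a single brightness value, and the training set is heavily weighted toward lighter-skinned faces; the scenario, numbers, and threshold are all invented to mirror the article’s point, not drawn from any real system:

```python
from statistics import mean

# Toy 1-D "face detector": each sample is an average skin-tone brightness
# value. The training set is heavily skewed toward lighter-skinned faces
# (an assumed scenario for illustration, not real data).
training = [0.82, 0.78, 0.85, 0.80, 0.79, 0.81]

centroid = mean(training)   # what the model "learned" a face looks like
THRESHOLD = 0.15            # arbitrary tolerance for this sketch

def looks_like_a_face(brightness: float) -> bool:
    return abs(brightness - centroid) <= THRESHOLD

print(looks_like_a_face(0.80))  # True: matches the training distribution
print(looks_like_a_face(0.35))  # False: rejected not because it isn't a
                                # face, but because the training data
                                # never represented it
```

Nothing in the code is “biased” on its own; the bias arrives entirely through the data the model was trained on, which is why it is so hard to spot by inspecting the program.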

“One shortcoming of A.I., in general, when it comes to moderating anything from comments to user content, is that it’s inherently opinionated by design,” said PJ Ahlberg, the executive technical director of Stink Studios New York, an agency that uses A.I. for creating social media bots and moderating brand campaigns.

Once a training set is developed, that data is often shared among developers, which means the bias spreads to multiple programs. Because the same data sets feed multiple A.I. systems, Ahlberg says, developers can’t simply modify the data inside one program, which makes biases difficult to remove once they are discovered.

A.I. cannot determine intent

A.I. can detect a swastika in a photograph — but the software cannot determine how it is being used. Facebook, for example, recently apologized after removing a post that contained a swastika but was accompanied by a text plea to stop the spread of hate.

This is an example of the failure of A.I. to recognize intent. Facebook even tagged a picture of the statue of Neptune as sexually explicit. Additionally, algorithms may unintentionally flag photojournalistic work because of hate symbols or violence that may appear in the images.

Historic images shared for educational purposes are another example — in 2016, Facebook caused a controversy after it removed the historic “napalm girl” photograph multiple times before pressure from users forced the company to change its hardline stance on nudity and reinstate the photo.

A.I. tends to serve as an initial screen, but human moderators are often still needed to determine whether content actually violates community standards. Despite improvements to A.I., that isn’t changing anytime soon. Facebook, for example, is increasing the size of its review team to 20,000 this year, double last year’s count.

A.I. is helping humans work faster

A human brain may still be required, but A.I. has made the process more efficient. A.I. can help determine which posts require a human review, as well as help prioritize those posts. In 2017, Facebook shared that A.I. designed to spot suicidal tendencies had resulted in 100 calls to emergency responders in one month. At the time, Facebook said that the A.I. was also helping determine which posts see a human reviewer first.
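That triage step can be pictured as a priority queue: an upstream model assigns each post a risk score, and human reviewers work from the highest score down instead of reading the feed in order. The post IDs and scores below are invented for illustration:

```python
import heapq

# Hypothetical (post_id, risk_score) pairs from an upstream A.I. model;
# higher score = more urgent. All values here are invented.
scored_posts = [
    ("post-101", 0.12),
    ("post-102", 0.97),  # e.g. flagged for possible self-harm language
    ("post-103", 0.55),
    ("post-104", 0.91),
]

# heapq is a min-heap, so negate the scores to pop the riskiest post first.
queue = [(-score, post_id) for post_id, score in scored_posts]
heapq.heapify(queue)

review_order = [heapq.heappop(queue)[1] for _ in range(len(queue))]
print(review_order)  # ['post-102', 'post-104', 'post-103', 'post-101']
```

The A.I. never makes the final call in this arrangement; it just decides which post a human sees first, which is where the sources quoted here say it currently works best.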

“[A.I. has] come a long way and it’s definitely making progress, but the reality is you still very much need a human element verifying that you are modifying the right words, the right content, and the right message,” said Chris Mele, the managing director at Stink Studios. “Where it feels A.I. is working best is facilitating human moderators and helping them work faster and on a larger scale. I don’t think A.I. is anywhere near being 100 percent automated on any platform.”

A.I. is fast, but the ethics are slow

Technology, in general, tends to advance faster than laws and ethics can keep up — and social media moderation is no exception. Binch suggests that gap could mean increased demand for employees with a background in the humanities or ethics, something most programmers lack.

As he put it, “We’re at a place now where the pace, the speed, is so fast, that we need to make sure the ethical component doesn’t drag too far behind.”

Hillary K. Grigonis