Social (Net)Work: What can A.I. catch — and where does it fail miserably?

Criticism for hate speech, extremism, fake news, and other content that violates community standards has the largest social media networks strengthening policies, adding staff, and re-working algorithms. In the Social (Net)Work Series, we explore social media moderation, looking at what works and what doesn’t, while examining possibilities for improvement.

From a video of a suicide victim on YouTube to ads targeting “Jew haters” on Facebook, social media platforms are plagued by inappropriate content that manages to slip through the cracks. In many cases, the platform’s response is to implement smarter algorithms to better identify inappropriate content. But what is artificial intelligence really capable of catching, how much should we trust it, and where does it fail miserably?

“A.I. can pick up offensive language and it can recognize images very well. The power of identifying the image is there,” says Winston Binch, the chief digital officer of Deutsch, a creative agency that uses A.I. in creating digital campaigns for brands from Target to Taco Bell. “The gray area becomes the intent.”

A.I. can read both text and images, but accuracy varies

Using natural language processing, A.I. can be trained to recognize text across multiple languages. A program designed to spot posts that violate community guidelines, for example, can be taught to detect racial slurs or terms associated with extremist propaganda.
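At its simplest, this kind of detection starts from term matching before any statistical model is layered on top. A minimal sketch of that starting point, with hypothetical placeholder terms standing in for a real blocklist (which in practice would hold thousands of entries across many languages):

```python
import re

# Hypothetical placeholder terms standing in for a real blocklist.
BLOCKED_TERMS = {"badword", "extremistslogan"}

def flag_post(text: str) -> bool:
    """Return True if any blocked term appears as a whole word."""
    words = re.findall(r"[a-z']+", text.lower())
    return any(word in BLOCKED_TERMS for word in words)

print(flag_post("This post contains a badword in it"))  # True
print(flag_post("A perfectly ordinary post"))           # False
```

Real systems go far beyond lists like this, but even trained models inherit the same core weakness: matching terms is easy, and understanding them is not.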


A.I. can also be trained to recognize images, to detect certain forms of nudity or spot symbols like the swastika. It works well in many cases, but it isn’t foolproof. For example, Google Photos was criticized for tagging images of dark-skinned people with the keyword “gorilla.” Years later, Google still hasn’t found a solution for the problem, instead choosing to remove the program’s ability to tag monkeys and gorillas entirely.

Algorithms also need to be updated as a word’s meaning evolves, or to understand how a word is used in context. For example, LGBT Twitter users recently noticed a lack of search results for #gay and #bisexual, among other terms, leading some to feel the service was censoring them. Twitter apologized for the error, blaming it on an outdated algorithm that was falsely identifying posts tagged with the terms as potentially offensive. Twitter said its algorithm was supposed to consider the term in the context of the post, but had failed to do so with those keywords.
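The failure mode Twitter described can be illustrated with a context-blind filter: it checks only for the presence of a term, so a supportive post and an abusive one receive the same verdict. The tags and posts below are illustrative, not Twitter’s actual lists:

```python
SENSITIVE_TAGS = {"#gay", "#bisexual"}  # illustrative, not Twitter's real list

def naive_filter(post: str) -> bool:
    """Context-blind check: flags any post containing a sensitive tag."""
    return any(tag in post.lower() for tag in SENSITIVE_TAGS)

# Both posts get the same verdict, because the filter never looks at context.
print(naive_filter("Proud to march with my #gay friends today!"))  # True
print(naive_filter("A hateful message using #gay as an insult"))   # True
```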

A.I. is biased

The gorilla tagging fail brings up another important shortcoming — A.I. is biased. You might wonder how a computer could possibly be biased, but A.I. is trained by watching people complete tasks, or by inputting the results of those tasks. For example, programs to identify objects in a photograph are often trained by feeding the system thousands of images that were initially tagged by hand.

The human element is what makes it possible for A.I. to do tasks but at the same time gives it human bias.

The human element is what makes it possible for A.I. to complete tasks previously impossible with conventional software, but that same human element also inadvertently gives human bias to a computer. An A.I. program is only as good as its training data — if the system was largely fed images of white males, for example, the program will have difficulty identifying people with other skin tones.
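The “only as good as the training data” point can be made concrete with a toy tally: a model that just learns the dominant pattern in a skewed data set can look passable on paper yet fail the underrepresented group completely. The numbers here are invented for illustration:

```python
from collections import Counter

# Invented, deliberately skewed training set: 95 examples of group A, 5 of group B.
training_labels = ["A"] * 95 + ["B"] * 5

# A degenerate "model" that always predicts the most common training label.
majority_label = Counter(training_labels).most_common(1)[0][0]

# Evaluate on a balanced test set of both groups.
test_set = ["A"] * 50 + ["B"] * 50
correct = sum(label == majority_label for label in test_set)
accuracy_b = sum(l == majority_label for l in test_set if l == "B") / 50

print(f"Overall accuracy: {correct / len(test_set):.0%}")  # 50%
print(f"Accuracy on group B: {accuracy_b:.0%}")            # 0%
```

Real models are far more sophisticated than a majority-vote rule, but the same skew shows up in softer form: accuracy quietly drops on whatever the training data underrepresents.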

“One shortcoming of A.I., in general, when it comes to moderating anything from comments to user content, is that it’s inherently opinionated by design,” said PJ Ahlberg, the executive technical director of Stink Studios New York, an agency that uses A.I. for creating social media bots and moderating brand campaigns.

Once a training set is developed, that data is often shared among developers, which means the bias spreads to multiple programs. Ahlberg says developers often cannot modify those shared data sets in programs built on multiple A.I. systems, which makes it difficult to remove a bias once it has been discovered.

A.I. cannot determine intent

A.I. can detect a swastika in a photograph — but the software cannot determine how it is being used. Facebook, for example, recently apologized after removing a post that contained a swastika but was accompanied by a text plea to stop the spread of hate.

This is an example of the failure of A.I. to recognize intent. Facebook even tagged a picture of the statue of Neptune as sexually explicit. Additionally, algorithms may unintentionally flag photojournalistic work because of hate symbols or violence that may appear in the images.

Historic images shared for educational purposes are another example — in 2016, Facebook caused a controversy after it removed the historic “napalm girl” photograph multiple times before pressure from users forced the company to change its hardline stance on nudity and reinstate the photo.

A.I. tends to serve as an initial screening, but human moderators are often still needed to determine whether the content actually violates community standards. Despite improvements to A.I., that isn’t changing anytime soon. Facebook, for example, is increasing the size of its review team to 20,000 this year, double last year’s count.

A.I. is helping humans work faster

A human brain may still be required, but A.I. has made the process more efficient. A.I. can help determine which posts require a human review, as well as help prioritize those posts. In 2017, Facebook shared that A.I. designed to spot suicidal tendencies had resulted in 100 calls to emergency responders in one month. At the time, Facebook said that the A.I. was also helping determine which posts see a human reviewer first.
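The triage role described here can be sketched as a simple priority queue: a model assigns each flagged post a risk score, and human moderators review from the highest score down. The posts and scores below are made up for illustration:

```python
# Hypothetical posts paired with risk scores a model might assign (0.0-1.0).
flagged_posts = [
    ("vacation photos", 0.05),
    ("possible self-harm language", 0.97),
    ("heated political argument", 0.40),
]

def review_queue(posts):
    """Order posts so human moderators see the riskiest content first."""
    return sorted(posts, key=lambda item: item[1], reverse=True)

for text, score in review_queue(flagged_posts):
    print(f"{score:.2f}  {text}")
```

The design choice matters: the model never decides alone, it only reorders the queue, so a high-stakes post like a suicide threat reaches a human reviewer in minutes instead of hours.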


“[A.I. has] come a long way and it’s definitely making progress, but the reality is you still very much need a human element verifying that you are modifying the right words, the right content, and the right message,” said Chris Mele, the managing director at Stink Studios. “Where it feels A.I. is working best is facilitating human moderators and helping them work faster and on a larger scale. I don’t think A.I. is anywhere near being 100 percent automated on any platform.”

A.I. is fast, but the ethics are slow

Technology, in general, tends to grow faster than laws and ethics can keep up — and social media moderation is no exception. Binch suggests this could mean an increased demand for employees with a background in the humanities or ethics, something most programmers don’t have.

As he put it, “We’re at a place now where the pace, the speed, is so fast, that we need to make sure the ethical component doesn’t drag too far behind.”
