Google's video AI was tricked into thinking a video about apes was about spaghetti

While artificial intelligence is an important field that's growing by leaps and bounds, perhaps its most interesting lesson concerns just how good the human brain is at certain tasks. Computers might be better at performing math and looking dozens of chess moves into the future, but they can't yet compete with the human brain at figuring out things like a video's topic.

A recent research project demonstrated just that by feeding videos to Google's Cloud Video Intelligence API and seeing if it could determine exactly what a given video was about. Apparently, this seemingly simple task is a challenge for Google's AI and highlights the difficulty of building automatic systems to categorize video, as Motherboard reports.

The research team in question works at the University of Washington, and it used some trickery to see how smart the Google API really is. Currently in beta, the Google Cloud Video Intelligence API has one job: to "make video searchable" by annotating videos so that they're easier for humans to search through.
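For context, requesting those labels is a short exercise with the API's client library. The sketch below uses today's Python client and asks for shot-level labels on a video stored in Cloud Storage; the beta API the researchers tested may have differed, and the bucket path is a placeholder.

```python
# Minimal sketch: ask the Cloud Video Intelligence API for shot-level labels.
# Assumes credentials are configured and the video lives in a GCS bucket.
from google.cloud import videointelligence

client = videointelligence.VideoIntelligenceServiceClient()
operation = client.annotate_video(
    request={
        "features": [videointelligence.Feature.LABEL_DETECTION],
        "input_uri": "gs://your-bucket/jane_goodall.mp4",  # placeholder path
    }
)
result = operation.result(timeout=300)

# Print the label the API assigned to each detected shot.
for annotation in result.annotation_results[0].shot_label_annotations:
    print(annotation.entity.description)
```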

In their tests, the researchers injected extraneous, subliminal images of a bowl of pasta into a video featuring primatologist Jane Goodall and gorillas. As a result, the Google AI concluded that the video was about spaghetti rather than apes. In another example, placing a picture of an Audi into a video about tigers caused the AI to conclude that the video was about cars.
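The mechanics of the trick are simple: copy the video frame by frame and, at a low fixed rate, swap in a still image. The sketch below illustrates the idea with OpenCV; the file names and the once-every-two-seconds rate are placeholders, not the researchers' actual parameters.

```python
# Sketch of periodic frame insertion: rewrite a video, replacing one frame
# per period with a still image while passing all other frames through.
import cv2

def insert_image_periodically(video_in, image_path, video_out, period_s=2.0):
    cap = cv2.VideoCapture(video_in)
    fps = cap.get(cv2.CAP_PROP_FPS)
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

    # Resize the injected image to match the video's frame size.
    image = cv2.resize(cv2.imread(image_path), (width, height))

    writer = cv2.VideoWriter(
        video_out, cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height)
    )

    period_frames = max(1, int(fps * period_s))
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # One frame per period is the injected image; the rest are unchanged.
        writer.write(image if frame_idx % period_frames == 0 else frame)
        frame_idx += 1

    cap.release()
    writer.release()

insert_image_periodically("jane_goodall.mp4", "spaghetti.jpg", "doctored.mp4")
```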


Although they might sound somewhat comical, these mistakes point to a serious issue with the AI. As the researchers noted in their conclusion:

“However, we showed that the API has certain security weaknesses. Specifically, an adversary can insert an image, periodically and at a very low rate, into the video in a way that all the generated shot labels are about the inserted image. Such vulnerability seriously undermines the applicability of the API in adversarial environments.”

Even worse, according to the researchers, “Furthermore, an adversary can bypass a video filtering system by inserting a benign image into a video with illegal contents.” The fact that doing so requires no specialized knowledge of the AI's machine learning algorithms, or of video annotation in general, is particularly disturbing.

Ultimately, the research shows that AI has a long way to go before it can match the human brain at determining things like a video's topic. Subliminal messages inserted into video have long been said to affect the human psyche, but at least a human wouldn't think that a video about apes is actually about spaghetti; the human would probably just start craving pasta instead.
