Researchers use A.I. to make smiling pet pics — and it’s as creepy as it sounds

NVIDIA Research

Can’t get your dog or that tiger at the zoo to smile for your Instagram? A new artificially intelligent program developed by researchers from Nvidia can take the expression from one animal and put it on the photo of another. Called GANimal — after generative adversarial networks, a type of A.I. — the software lets users upload an image of one animal and re-create that animal’s expression and pose on a photo of a different animal.

GAN programs are designed to convert one image to look like another, but are typically focused on narrower tasks like turning horses into zebras. GANimal, however, applies several different changes to the image, transferring the expression, the position of the animal’s head, and in many cases even the background from the inspiration image onto the source image. Unlike most GANs, the program is designed to work with any animal.

Just how well it works, however, is up for debate. One of the sample images shared by the researchers makes a pug look more like a mastiff and a fox look more like a lynx. While some of the sample images have a rather creepy look to them, the research could have important implications for future A.I. research.


As Nvidia explains, previous programs needed to be trained on many images of the target animal to work, while the new program needs only a handful. The researchers call the variation FUNIT, short for “Few-shot UNsupervised Image-to-image Translation.” The target, or the animal the new expression is applied to, is specified with a small handful of images instead of the massive number usually required to train artificial intelligence programs.

The program learned to mix those expressions onto a new animal in the same way that a lot of people learn — practice. “In this case, we train a network to jointly solve many translation tasks where each task is about translating a random source animal to a random target animal by leveraging a few example images of the target animal,” said Ming-Yu Liu, one of the lead researchers on the project. “Through practicing solving different translation tasks, eventually the network learns to generalize to translate known animals to previously unseen animals.”
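Liu’s description boils down to a simple structure: one network extracts the source animal’s pose and expression (its “content”), another averages a “class” code over the few example images of the target animal, and a decoder combines the two. The toy sketch below illustrates only that data flow; the encoders, the decoder, and every name in it are invented stand-ins for illustration, not Nvidia’s actual FUNIT code, which uses convolutional networks and adversarial training.

```python
import random

random.seed(0)

def encode_content(image):
    """Stand-in content encoder: keeps pose/expression features of the source."""
    return [x * 0.5 for x in image]

def encode_class(examples):
    """Stand-in class encoder: average features over the K target example images."""
    k = len(examples)
    return [sum(col) / k for col in zip(*examples)]

def translate(source, target_examples):
    """Combine the source's content code with the target's class code."""
    content = encode_content(source)
    cls = encode_class(target_examples)
    # A real decoder is a neural network; here we simply sum the two codes.
    return [c + s for c, s in zip(content, cls)]

source = [random.random() for _ in range(8)]                       # one source-animal image
targets = [[random.random() for _ in range(8)] for _ in range(3)]  # K=3 target images

out = translate(source, targets)
print(len(out))  # 8
```

In the real system, the “practice” Liu describes means training this kind of structure adversarially across many random source/target animal pairs, which is what lets it generalize to animals it has never seen.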

With additional research, Nvidia suggests, the work could lead to real-world uses, such as filming live-action movies with easily trainable dogs and then using A.I. to turn those dogs into tigers on screen. The work is also part of researcher Liu’s ongoing goal to use neural networks to give software an “imagination” that’s more human.

The program is available to try out on your own photos at Nvidia’s AI Playground.

Hillary K. Grigonis