Can computers draw? Google just taught a neural network to sketch

The doodles above were created by Sketch RNN and overlaid on a stock photo. Google Research / Igor Stevanovic / 123RF
Google’s AI can now identify your rough sketches, but the research team is also teaching computers to draw their own doodles. On April 13, the Google research team shared its latest work: a neural network capable of drawing its own sketches, currently called Sketch RNN. Unlike AutoDraw, Sketch RNN is still a research project, described in a recently published paper, and is not yet available to the public.

The system is based on human sketches, but doesn’t imitate the human doodles exactly, Google Research says, instead creating its own new drawings. While the system still starts with a sketch, the program reconstructs the original drawing in a unique way. That’s because the team deliberately added noise between the encoder and decoder, so that the computer can’t recall the exact sketch.
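The idea of injecting noise between the encoder and decoder can be sketched in a few lines. This is a hypothetical illustration only, not Google's code: the real Sketch RNN uses a bidirectional RNN encoder, while the `encode` function below is a placeholder linear map, and the weights and latent size are made up for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(drawing, latent_dim=8):
    # Stand-in "encoder": project the flattened drawing to a mean (mu)
    # and spread (sigma) for a latent vector. Placeholder weights only.
    flat = np.asarray(drawing, dtype=float).ravel()
    w = np.ones((latent_dim, flat.size)) / flat.size
    mu = w @ flat
    sigma = np.full(latent_dim, 0.1)
    return mu, sigma

def sample_latent(mu, sigma):
    # The deliberately injected noise: z = mu + sigma * epsilon.
    # Each sample differs, so the decoder never sees the exact sketch.
    eps = rng.standard_normal(mu.shape)
    return mu + sigma * eps

mu, sigma = encode([[10, 0], [0, 10], [-10, 0], [0, -10]])
z1 = sample_latent(mu, sigma)
z2 = sample_latent(mu, sigma)
assert not np.allclose(z1, z2)  # same input, two different latent codes
```

Because the decoder only ever receives one of these noisy latent codes, its output is a reconstruction in the same spirit as the input rather than a pixel-perfect copy.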

The goal isn’t to teach computers to copy a drawing, but to see if neural networks are in fact capable of creating their own drawings. To do that, Google researchers David Ha and Douglas Eck trained the system on 70,000 drawings recorded as motor sequences, capturing the direction of each pen stroke and when the pen was lifted, rather than simply inputting thousands of already completed drawings.
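A motor-sequence representation like the one described above can be illustrated with a common stroke encoding: one row per pen motion, storing the offset from the previous point plus a flag for when the pen lifts. The exact field layout here is an assumption for illustration, not necessarily the format Google used.

```python
import numpy as np

# Hypothetical stroke encoding: each row is (dx, dy, pen_lifted), where
# dx/dy are offsets from the previous pen position and pen_lifted = 1
# marks the end of a stroke. This example traces a simple square.
sketch = np.array([
    [ 10,   0, 0],   # move right 10 units, pen down
    [  0,  10, 0],   # move down 10 units, pen down
    [-10,   0, 0],   # move left 10 units, pen down
    [  0, -10, 1],   # close the square, then lift the pen
], dtype=np.int16)

def to_absolute(strokes):
    """Convert relative pen offsets back to absolute (x, y) points."""
    return np.cumsum(strokes[:, :2], axis=0)

print(to_absolute(sketch))
```

Feeding the network the motions themselves, instead of finished images, is what lets it learn *how* people draw, not just what the results look like.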

While the system’s cat drawings don’t look much better than a preschooler’s, Sketch RNN is capable of creating a unique sketch. To test the program’s ability to draw a unique cat, the team also fed it odd drawings, like a three-eyed cat. “When we feed in a sketch of a three-eyed cat, the model generates a similar looking cat that has two eyes instead, suggesting that our model has learned that cats usually only have two eyes,” Ha wrote.

If the input sketch is actually of a different object but still paired with the word cat, the program still creates a cat, though the overall shape mimics the original drawing. For example, when the researchers drew a truck and told the computer it was a pig, they got a truck-shaped pig.

So what’s the real-world application? Google Research says the program can help designers quickly generate a large number of unique sketches. Eventually, the program could also be used to teach drawing, to learn more about the way humans sketch, or to finish incomplete drawings.

Hillary K. Grigonis