
Neural network algorithm turns video into Prisma-style animated paintings

Last week, filmmakers wowed the Internet with a hand-built time-lapse video that used Prisma to make it look like an animation rather than a series of photographs. It turns out you don't need to spend hours feeding single photos through Prisma one at a time to achieve this effect, though you will need a powerful computer. Danil Krivoruchko, a digital artist, ran an iPhone video through a neural network algorithm that automatically painted over every frame, and the result is simply stunning.

The video, titled NYC Flow, was shot at 240 frames per second, so the slow-motion footage had a dreamy feel even before processing. From there, Krivoruchko ran it through the algorithm, which transformed it into a living painting.

The algorithm itself comes from Manuel Ruder and his team at the University of Freiburg in Germany, who described it in a published paper. It can copy a style from an input source, such as a painting, and apply it to a video. The underlying technology is very similar to what Prisma employs, except that the app doesn't yet handle video.
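The style-copying step in this family of techniques represents a painting's "style" as correlations between a neural network's feature channels, captured in a Gram matrix, and then optimizes the output frame to match those correlations. The following is a minimal NumPy-only sketch of that style representation; real implementations compare features extracted by a convolutional network, and the function names here are illustrative:

```python
import numpy as np

def gram_matrix(features):
    """Style representation: channel-by-channel correlations of a
    feature map, independent of where things sit in the image."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_loss(generated_features, style_features):
    """Mean squared difference between the two Gram matrices;
    optimization pushes this toward zero, frame by frame."""
    diff = gram_matrix(generated_features) - gram_matrix(style_features)
    return float(np.mean(diff ** 2))
```

What distinguishes Ruder's video method from per-photo stylization is an additional temporal-consistency penalty between consecutive frames, which keeps the painted texture from flickering as the video plays.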

Should you be interested in trying out this technique, head over to Ruder's GitHub, where the open-source code can be downloaded. Take heed, however: you will need a powerful computer running Ubuntu with a hefty graphics card. Four gigabytes of video memory are required just to output a video at a resolution of 450 x 350 pixels.
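Assuming a working Torch and CUDA setup, obtaining the code is a straightforward clone. The helper-script invocation and file names below are illustrative, not exact usage; consult the repository's README for current setup instructions and the model files it requires:

```shell
# Grab the open-source implementation
git clone https://github.com/manuelruder/artistic-videos.git
cd artistic-videos

# Stylize a video with a chosen style image (illustrative invocation;
# see the README for the script's actual arguments and prerequisites)
./stylizeVideo.sh my_clip.mp4 starry_night.jpg
```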

The algorithm was built to automate rotoscoping, replacing the time-intensive task of having human artists hand-paint every frame to turn footage into an animation. Rotoscoping is an old technique, with perhaps its most notable use being in the film A Scanner Darkly.

NYC Flow is part of Krivoruchko’s Deep Slow Flow project on Instagram, which explores the idea of using neural network code in filmmaking.
