Photographers favor wide-angle lenses to show more of the scene, but turn to telephoto lenses for less distortion and more flattering portraits. What if software could mix the best of both worlds? Researchers from the University of California, Santa Barbara and Nvidia recently developed what they're calling computational zoom, which allows photographers to change the composition after the fact, choosing more flattering angles or even bringing the background closer to or farther from the subject. The researchers presented their work at the SIGGRAPH conference earlier this week.
The technique is a form of computational photography, which uses software to create what isn't possible with hardware alone. To use it, the photographer first takes a series of images, moving farther into the scene after each shot. The software estimates where the camera was for each photo, then uses all of the shots to build a 3D model of the scene. With that model, the user can pull the background closer or push it farther away, adjust the position of the foreground, or even shorten the apparent distance between the two.
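The paper's actual pipeline isn't reproduced here, but the geometry it exploits is standard pinhole-camera math: focal length only scales the image, while camera position determines perspective, which is why multiple shots from different distances are needed. The sketch below (all lens and distance numbers are illustrative, not from the research) shows that a close wide-angle shot and a distant telephoto shot can render the subject at the same size while the background changes size dramatically:

```python
# Pinhole-camera sketch of why computational zoom needs shots from
# several distances: perspective comes from position, not focal length.
# All focal lengths and distances below are made-up illustrative values.

def image_height(focal_mm, object_height_m, distance_m):
    """Apparent height of an object on the sensor in a pinhole model."""
    return focal_mm * object_height_m / distance_m

# A 1.8 m subject with a 10 m-tall building 50 m behind them.
subject_h, building_h = 1.8, 10.0

# Shot 1: 24 mm wide-angle lens, 2 m from the subject.
s1 = image_height(24, subject_h, 2.0)
b1 = image_height(24, building_h, 2.0 + 50.0)

# Shot 2: 200 mm telephoto, backed up so the subject stays the same
# size on the sensor (distance scales with focal length: 2 * 200/24 m).
d2 = 2.0 * 200 / 24
s2 = image_height(200, subject_h, d2)
b2 = image_height(200, building_h, d2 + 50.0)

print(f"subject:  {s1:.1f} mm vs {s2:.1f} mm")   # identical subject size
print(f"building: {b1:.1f} mm vs {b2:.1f} mm")   # far larger in shot 2
```

The subject renders at 21.6 mm in both shots, but the building jumps from about 4.6 mm to 30 mm. Capturing a sweep of such viewpoints gives the software real samples of every intermediate perspective to blend between.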
Using the software, the researchers were able to create images that would not have been possible starting from a single shot. For example, the program can make the person in the photo appear to have been captured with the flattering perspective of a telephoto lens, while the background looks as though it was shot with a wide-angle lens. The program can also bring parts of the scene closer together or push them farther apart, for example, to make something in the background appear more dominant in the frame.
“This new framework really empowers photographers by giving them much more flexibility later on to compose their desired shot,” said Pradeep Sen, a UCSB adviser who worked on the project. “It allows them to tell the story they want to tell.”
Photography has traditionally had a set of elements that can be adjusted in post-processing and a handful that cannot be altered once the image is taken. Computational photography, however, is changing that. Focus, for example, is traditionally something photographers had to get right in-camera. A Panasonic technique, though, records 4K video while stepping the focus through each frame, allowing the focus to be chosen after the fact using a form of computational photography. Canon also offers a version of the technique for making small focus adjustments. And research presented earlier this week uses computational photography to edit images even before the shot is taken.
With the latest research, composition could join the list of previously impossible photo edits that are now not only doable but simple. Of course, as with post-focus techniques, the trick only works if you plan to use it ahead of time, since both approaches require capturing more than a single image.