Photo editing software often compensates for a smartphone's limited hardware, but what if those shots could be edited automatically, in real time? That's the question researchers from the Massachusetts Institute of Technology (MIT) and Google asked when expanding an earlier machine-learning program that improved shots automatically after sending them to the cloud. Presented during this week's SIGGRAPH computer graphics conference, the new automatic photo software edits so quickly that the user sees the results on the screen in real time, before the shot is even taken.
The program builds on earlier MIT research that trained a computer to edit images automatically. Researchers taught that earlier program to apply specific adjustments by feeding the system five variations of each image, each edited by a different professional retoucher. After repeating that process across a few thousand images, the program learned to identify and fix common image issues.
The new software is built on the same artificial-intelligence platform but speeds the program up to the point where edits finish in a tenth of the previous time. That allows the camera's live view to show edits in real time instead of sending the shot to a cloud system for processing. So how did the researchers achieve the speed boost? The software's output isn't actually an image but an image formula: a compact set of adjustments describing how the picture should change. Calculating those changes, and only applying them if a photo is taken, speeds up the process.
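To make the "formula, not an image" idea concrete, here is a minimal sketch in Python with NumPy. The researchers' actual system learns its adjustments with a neural network; this illustration simply assumes the recipe takes the form of a single 3x4 affine color transform (a gain matrix plus offsets), which is cheap to store and cheap to apply to each viewfinder frame. The `apply_recipe` function and the example `recipe` values are hypothetical, for illustration only.

```python
import numpy as np

def apply_recipe(image, affine):
    """Apply a 3x4 affine color transform to an HxWx3 float image."""
    h, w, _ = image.shape
    # Append a constant 1 channel so the offset folds into one matmul.
    ones = np.ones((h, w, 1), dtype=image.dtype)
    homogeneous = np.concatenate([image, ones], axis=-1)  # HxWx4
    return homogeneous @ affine.T                          # HxWx3

# Example recipe: boost red and green slightly, pull blue back a touch.
recipe = np.array([
    [1.2, 0.0, 0.0, 0.02],   # red:   gain 1.2, small positive offset
    [0.0, 1.1, 0.0, 0.00],   # green: gain 1.1
    [0.0, 0.0, 0.9, -0.01],  # blue:  gain 0.9, small negative offset
])

frame = np.random.rand(480, 640, 3).astype(np.float32)
edited = apply_recipe(frame, recipe)
```

Because the recipe is tiny compared to the image, it can be computed once and then re-applied frame after frame, which is what makes a live preview feasible.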
The original image is also divided into a grid of sections, and the expensive analysis runs on a downsized copy; the resulting adjustments are then applied back at full resolution, so no pixels are lost in the final image. The program works on the divided sections at once, editing hundreds of pixels simultaneously instead of processing every pixel individually.
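The downsample-then-upsample trick can be sketched as follows. This is not the researchers' pipeline (which uses a learned network and more sophisticated upsampling); it is a simplified stand-in where the "analysis" is just a per-cell brightness gain, and the key point is that the coefficients, not the pixels, are upsampled, so full-resolution detail is never thrown away.

```python
import numpy as np

def downsample(image, factor):
    """Cheap box downsampling: average each factor x factor block."""
    h, w, c = image.shape
    h2, w2 = h // factor, w // factor
    return image[:h2 * factor, :w2 * factor].reshape(
        h2, factor, w2, factor, c).mean(axis=(1, 3))

def upsample_nearest(grid, factor):
    """Nearest-neighbor upsample of a per-cell coefficient grid."""
    return grid.repeat(factor, axis=0).repeat(factor, axis=1)

frame = np.random.rand(480, 640, 3).astype(np.float32)

# 1. Shrink the frame so the analysis touches far fewer pixels.
small = downsample(frame, 16)                        # 30 x 40 x 3

# 2. Stand-in analysis: a per-cell gain that targets mid brightness.
#    (The real system would run a neural network at this step.)
gain = 0.5 / (small.mean(axis=-1, keepdims=True) + 1e-6)

# 3. Upsample the coefficients back to full size and apply them.
edited = frame * upsample_nearest(gain, 16)
```

The expensive step (2) sees only 30 x 40 cells instead of 480 x 640 pixels, while step (3) is a trivial per-pixel multiply, which is why the grid approach is so much faster.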
With those two changes, the program needs only about 100 megabytes of memory per edit; without them, the same software needed 12 gigabytes.
The research is part of a growing trend of looking to computational photography to solve the shortcomings of the hardware that can fit inside a smartphone. Unlike earlier programs, the automatic photo software from MIT and Google tackles one of computational photography's biggest obstacles: the limited processing power of a mobile device. In the researchers' experiment, the program applied a high-dynamic-range algorithm in real time, boosting the image's colors and range of light beyond what the hardware alone could achieve.
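The article doesn't detail the researchers' HDR algorithm, but the general idea of tone mapping, compressing a wide range of scene brightness into what a screen can display, can be shown with a classic, much simpler global operator (Reinhard's `x / (1 + x)` curve). The sketch below is an illustration of that idea only, not the method from the paper.

```python
import numpy as np

def reinhard_tonemap(radiance):
    """Reinhard global tone-mapping operator: maps radiance in
    [0, inf) into the displayable [0, 1) range, compressing
    highlights while keeping shadows nearly linear."""
    return radiance / (1.0 + radiance)

# Simulated HDR frame: radiance values far above the 0-1 display range.
hdr = np.random.rand(480, 640, 3).astype(np.float32) * 8.0
ldr = reinhard_tonemap(hdr)
```

Even this trivial curve makes the trade-off clear: bright regions are compressed gently instead of clipping to white, which is the "range of light" benefit the researchers demonstrate in real time.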
“This technology has the potential to be very useful for real-time image enhancement on mobile platforms,” said Jon Barron, a Google researcher who worked on the project. “Using machine learning for computational photography is an exciting prospect but is limited by the severe computational and power constraints of mobile phones. This paper may provide us with a way to sidestep these issues and produce new, compelling, real-time photographic experiences without draining your battery or giving you a laggy viewfinder experience.”