
A.I. could help cameras see in candlelight, research suggests

CVPR 2018: Learning to See in the Dark

Low-light photography is a balancing act between blur and noise, but what if artificial intelligence could even the score? Researchers from the University of Illinois Urbana-Champaign and Intel have trained a program to produce low-noise images of a room lit by a single candle. By feeding a neural network pairs of shots of the same scene, one a short exposure and one a long exposure, the group managed to get images with less noise and without the odd color casts that alternative methods introduce. With additional research, the processing algorithms could help cameras take images with less noise without resorting to a longer shutter speed.

To build what the group calls the See-in-the-Dark data set, the researchers took pairs of images in limited light. Using a remote app to control the camera without touching it, the group first took a properly exposed long exposure, from 10 to 30 seconds long. The researchers then took a second image with a short exposure, from 0.03 to 0.1 seconds long, which typically produced a frame that was almost entirely black.
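
The gap between those two exposures is what the method has to bridge: the nearly black short frame is brightened by the ratio of the exposure times before the network cleans it up. The snippet below is a back-of-the-envelope illustration using the ranges above; the variable names are ours, not the researchers' code.

```python
# Hypothetical sketch: pairing a short and a long exposure and computing
# the amplification ratio used to brighten the dark frame.
short_exposure = 0.1   # seconds; nearly black straight off the sensor
long_exposure = 30.0   # seconds; the properly exposed reference shot

# The brightening factor is the ratio of the two exposure times,
# roughly x100 to x300 for the ranges described in the data set.
amplification = long_exposure / short_exposure
print(f"Amplification ratio: x{amplification:.0f}")  # -> x300
```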

Repeating this process around 5,000 times, some with a Sony a7S II and some with a Fujifilm X-T2, the researchers then used the paired images to train a neural network. The images were first preprocessed by separating the sensor data into color channels, subtracting the sensor's black level, and reducing the resolution. The data set also used RAW data from the camera sensor, not processed JPEGs.
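
In broad strokes, that preprocessing can be sketched in a few lines of NumPy. This is a hedged approximation rather than the group's released code: the RGGB Bayer layout and the 14-bit black and white levels below are assumptions that vary from sensor to sensor.

```python
import numpy as np

def pack_raw_bayer(raw: np.ndarray, black_level: int = 512,
                   white_level: int = 16383) -> np.ndarray:
    """Pack a Bayer-pattern RAW frame into four color channels at half
    resolution, subtracting the sensor's black level and normalizing.

    black_level and white_level are placeholder values for a 14-bit
    Sony-style sensor; real values depend on the camera.
    """
    # Subtract the black level and scale pixel values into [0, 1].
    norm = np.maximum(raw.astype(np.float32) - black_level, 0)
    norm /= (white_level - black_level)

    h, w = norm.shape
    # Each 2x2 Bayer block (assumed RGGB) becomes one pixel with four
    # channels, halving the spatial resolution as described above.
    return np.stack([norm[0:h:2, 0:w:2],   # R
                     norm[0:h:2, 1:w:2],   # G
                     norm[1:h:2, 0:w:2],   # G
                     norm[1:h:2, 1:w:2]],  # B
                    axis=-1)
```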

The algorithms trained on the data set, when applied to RAW sensor data, produced brighter images with less noise than other methods of handling the camera data, such as a standard demosaicing pipeline. The resulting images also had a more accurate white balance than current methods. The results improve on traditional image processing, the researchers said, and warrant more research.
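
For context, the traditional pipeline the learned approach is measured against resembles the default path through a RAW-processing library. A rough illustrative baseline (not the researchers' comparison code) using the rawpy library might look like this; the file name is a placeholder.

```python
import rawpy

# Illustrative traditional baseline: demosaic a RAW file with the
# camera's recorded white balance. This is the kind of conventional
# pipeline the learned approach is compared against.
with rawpy.imread("photo.ARW") as raw:
    rgb = raw.postprocess(use_camera_wb=True)  # standard demosaic + WB
```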

The enhanced processing method could help smartphones perform better in low light, along with enhancing handheld shots from DSLRs and mirrorless cameras, the group suggests. Video could also benefit, since taking a longer exposure isn't possible while maintaining a standard frame rate.

While the sample images from the program are impressive, the processing was only tested on stationary subjects. Image processing was also slower than current standards: images took 0.38 and 0.66 seconds to process, even at reduced resolution, too slow to maintain the burst speeds of current cameras. The group's data set was also designed for a specific camera sensor; without additional research into data sets that span multiple sensors, the process would have to be repeated for each new camera sensor. The researchers suggested that future work could address those limitations.

