Lens-free cameras are still a long way from practical use, but the technology is getting closer with the latest work from a research group at MIT.
Lensless cameras are tiny — but their slow processing times have kept them from being adopted in real-world applications. Researchers from the Massachusetts Institute of Technology, however, may have just moved the technology closer with a new technique for shooting lens-free using time itself.
Lensless cameras use a single light-sensing pixel and need as many as a thousand exposures to actually create a clear picture, making them too slow to adapt into actual products. A group from the MIT Media Lab, however, has crafted a method that is about 50 times faster than earlier lensless camera attempts.
Lenses redirect light onto the camera sensor to create a sharp image. Without a lens, earlier systems had to send out a pulse of light and read the returning light through a randomized pattern — then do it again about 1,000 times with a different pattern each time in order to gather enough data to create an image.
Instead of taking a thousand exposures on that lensless sensor, the group uses time-of-flight imaging — the sensor essentially times how long it takes for each photon of light to reach it. Since light takes longer to reach the camera the farther away the source is, that timing data gives the sensor an idea of just how far away objects are. By assigning a time to the light, the camera can use that data to reconstruct the scene.
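The distance calculation at the heart of time-of-flight sensing is simple to sketch. This is an illustrative example rather than the MIT group's code: the pulse travels to the object and back, so the one-way distance is the speed of light times the round-trip time, divided by two.

```python
# Illustrative sketch of the time-of-flight principle (not the paper's code):
# a photon's round-trip travel time implies the distance to the object.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def distance_from_time(round_trip_seconds: float) -> float:
    """The pulse travels out and back, so halve the total path length."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A photon returning after 10 nanoseconds implies an object roughly 1.5 m away.
d = distance_from_time(10e-9)
```

Because light covers about 30 centimeters per nanosecond, a time-of-flight sensor needs picosecond-scale timing resolution to resolve depth at everyday scales — which is why the researchers describe the sensing as "ultrafast."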
The new method still requires sending the light through randomized patterns in order to make sense of the data, but needs only about 50 exposures instead of a thousand. By combining multiple exposures with time-of-flight distance data, the sensor can reconstruct a scene without a lens in far less time than earlier attempts.
“Formerly, imaging required a lens, and the lens would map pixels in space to sensors in an array, with everything precisely structured and engineered,” said graduate student Guy Satat, who authored the paper along with Matthew Tancik and Ramesh Raskar. “With computational imaging, we began to ask: Is a lens necessary? Does the sensor have to be a structured array? How many pixels should the sensor have? Is a single pixel sufficient? These questions essentially break down the fundamental idea of what a camera is. The fact that only a single pixel is required and a lens is no longer necessary relaxes major design constraints, and enables the development of novel imaging systems. Using ultrafast sensing makes the measurement significantly more efficient.”
Lens-free cameras are currently being researched for their small size and ability to capture large amounts of data, as well as for recording light outside the visible spectrum.