While current 3D sensors in cameras like the Kinect capture all incoming light, the researchers have developed an imaging technology that gathers only the bits of light the camera needs. By doing so, the camera eliminates extra light, or noise, at capture time. Using a mathematical formula, the program processes the camera's data and renders the image even when it's taken in brighter environments; the approach works in bright light, in reflective or diffused light, or even through smoke.
“We have a way of choosing the light rays we want to capture and only those rays,” says Srinivasa Narasimhan, a CMU associate professor of robotics, in a university statement. “We don’t need new image-processing algorithms and we don’t need extra processing to eliminate the noise, because we don’t collect the noise. This is all done by the sensor.”
Explaining how depth cameras work, the researchers describe using a low-power laser to project “a pattern of dots or lines over a scene. Depending on how these patterns are deformed or how much time it takes light to reflect back to the camera, it is possible to calculate the 3D contours of the scene.” It is unclear whether the technology could be added to existing depth-sensing cameras with a software update.
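The deformation-based approach the researchers describe boils down to triangulation: a projector casts a dot at a known angle, the camera sees that dot shifted sideways (its disparity), and depth follows from similar triangles. The sketch below is a toy illustration of that principle only, not the CMU system; the focal length, baseline, and disparity values are made-up examples.

```python
# Toy structured-light triangulation: depth from the pixel shift
# (disparity) of a projected dot. By similar triangles:
#     depth = focal_length * baseline / disparity
# Example values below are hypothetical, not from the research.

def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Return depth in meters of a projected dot.

    focal_px     -- camera focal length, in pixels
    baseline_m   -- projector-to-camera distance, in meters
    disparity_px -- observed pixel shift of the dot
    """
    if disparity_px <= 0:
        raise ValueError("dot not detected or effectively at infinity")
    return focal_px * baseline_m / disparity_px

# A 600 px focal length, 7.5 cm baseline, and a 30 px shift
# place the dot at 600 * 0.075 / 30 = 1.5 m from the camera.
print(depth_from_disparity(600.0, 0.075, 30.0))  # 1.5
```

Repeating this for every dot in the projected pattern yields the 3D contours of the scene; the time-of-flight variant the researchers mention instead derives depth from how long the light takes to return.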
The new research could open the door to additional applications, or enhance existing ones such as medical imaging, inspection of shiny parts, and sensing for robots in space, among others. The researchers say the technology could also be incorporated into smartphones, making it accessible for more everyday uses.