
Scientists develop imaging tech to help 3D cameras see in bright light

Wish you could use your Microsoft Xbox Kinect in bright light? Apparently, so did a team of researchers from Carnegie Mellon University and the University of Toronto. The computer scientists looked into why bright light and sunlight cause depth-sensing cameras like the Kinect to fail, and presented their findings and solutions at the SIGGRAPH 2015 conference earlier this month (via RedOrbit.com).

While current 3D sensors in cameras like the Kinect capture all incoming light points, the researchers have developed an imaging technology that gathers only the bits of light the camera needs. By doing so, the camera eliminates extra light, or noise, before it is ever recorded. Using a mathematical formula, the program processes data from the camera and renders the image even when it’s taken in brighter environments; the formula works in bright light, reflective or diffused light, or even through smoke.
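As a rough illustration of why capturing only the needed rays helps (this is a toy simulation, not the researchers' actual hardware or formula), compare a conventional exposure, where every pixel integrates ambient light, with a selective exposure that reads out only the pixel expected to receive the projected ray:

```python
import numpy as np

rng = np.random.default_rng(0)

WIDTH = 1000                  # pixels in one sensor row (illustrative)
signal_pos = 137              # column where the projected laser dot lands
signal_strength = 50.0        # light contributed by the projector
ambient = rng.uniform(0, 10, WIDTH)  # bright ambient light hitting every pixel

# Conventional capture: every pixel integrates ambient light plus the signal.
full_frame = ambient.copy()
full_frame[signal_pos] += signal_strength

# Selective capture: only the pixel expected to receive the projected ray is
# exposed, so ambient light on the other pixels is never collected at all.
mask = np.zeros(WIDTH, dtype=bool)
mask[signal_pos] = True
selective_frame = np.where(mask, full_frame, 0.0)

# Unwanted light collected in each case (everything except the projector's
# contribution and the unavoidable ambient light at the signal pixel).
noise_full = full_frame.sum() - signal_strength - ambient[signal_pos]
noise_selective = selective_frame.sum() - signal_strength - ambient[signal_pos]
```

The selective frame collects no stray ambient light at all, which matches the quoted point that no extra processing is needed to remove noise that was never captured.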

“We have a way of choosing the light rays we want to capture and only those rays,” says Srinivasa Narasimhan, a CMU associate professor of robotics, in a university statement. “We don’t need new image-processing algorithms and we don’t need extra processing to eliminate the noise, because we don’t collect the noise. This is all done by the sensor.”

Depth-sensing 3D cameras like the Microsoft Kinect are easily overwhelmed by bright light, say researchers from Carnegie Mellon University and the University of Toronto. They’ve developed a technology that projects a pattern onto an object or subject, which helps it determine the 3D contours under bright light.

Related: Disney’s new ‘team rendering’ makes 3D animation easier

The researchers, explaining how depth cameras work, used a low-power laser to project “a pattern of dots or lines over a scene. Depending on how these patterns are deformed or how much time it takes light to reflect back to the camera, it is possible to calculate the 3D contours of the scene.” It is unclear whether the technology could be added to existing depth-sensing cameras via a software update.
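The pattern-deformation approach the researchers describe is the classic structured-light triangulation idea: the further a projected dot appears shifted from its reference position, the closer the surface. A minimal sketch of that geometry, with assumed baseline and focal-length values (not the researchers' calibration), looks like this:

```python
# Structured-light triangulation sketch. BASELINE and FOCAL_PX are
# illustrative assumptions, not values from the CMU/Toronto system.
BASELINE = 0.075    # metres between the projector and the camera (assumed)
FOCAL_PX = 580.0    # camera focal length expressed in pixels (assumed)

def depth_from_disparity(disparity_px: float) -> float:
    """Standard triangulation relation: depth Z = f * b / d, where d is
    how far (in pixels) the projected dot has shifted from its reference
    position. Larger shifts mean the surface is closer."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return FOCAL_PX * BASELINE / disparity_px

# A dot observed 29 pixels from its reference position sits about 1.5 m away.
z = depth_from_disparity(29.0)
```

Time-of-flight sensors, which the quote also mentions, compute depth differently (from the light's round-trip time), but the triangulation form above is the one that depends on how the projected pattern is deformed.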

The new research could open the technology to additional applications, or enhance existing ones such as medical imaging, inspection of shiny parts, and sensing for robots in space, among others. The researchers say the technology could also be incorporated into smartphones, bringing the imaging technology to more everyday uses.