
Scientists develop imaging tech to help 3D cameras see in bright light

Wish you could use your Microsoft Xbox Kinect in bright light? Apparently, so did a team of researchers from Carnegie Mellon University and the University of Toronto. The computer scientists investigated why bright light and sunlight cause depth-sensing cameras like the Kinect to fail, and presented their findings, titled “Homogeneous Codes for Energy-Efficient Illumination and Imaging,” at the SIGGRAPH 2015 conference (via RedOrbit.com) earlier this month.

While current 3D sensors in cameras like the Kinect capture every point of light in a scene, the researchers have developed an imaging technology that gathers only the bits of light the camera needs, eliminating the extra light, or noise. Using a mathematical formula, the system processes data from the camera and renders the image even when it’s taken in a brighter environment; the approach works in bright light, reflective or diffused light, or even through smoke.
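To make the idea concrete, here is a minimal sketch of why collecting only the expected light rays beats capturing everything. It is written in Python with NumPy, and every number in it is made up for illustration; it is not the researchers’ code. A simulated sensor that exposes only the row where a projected laser stripe should land never accumulates the bright ambient light hitting the rest of the frame, so the stripe stays easy to find.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 240x320 sensor; all values here are made up for illustration.
H, W = 240, 320

# The light we actually want: a low-power laser stripe on one sensor row.
signal = np.zeros((H, W))
stripe_row = 120
signal[stripe_row, :] = 100.0

# Bright ambient light ("noise") falling on every pixel.
ambient = rng.uniform(50.0, 80.0, size=(H, W))

# Conventional capture: every pixel integrates ambient light plus signal,
# so the stripe barely stands out against the background.
conventional = signal + ambient

# Selective capture: expose only the row where the stripe is expected.
# Ambient light on every other row is simply never collected.
mask = np.zeros((H, W))
mask[stripe_row, :] = 1.0
selective = (signal + ambient) * mask

print("conventional stripe vs. background:",
      conventional[stripe_row].mean(), "vs.", conventional[stripe_row + 1].mean())
print("selective stripe vs. background:",
      selective[stripe_row].mean(), "vs.", selective[stripe_row + 1].mean())
```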

“We have a way of choosing the light rays we want to capture and only those rays,” says Srinivasa Narasimhan, a CMU associate professor of robotics, in a university statement. “We don’t need new image-processing algorithms and we don’t need extra processing to eliminate the noise, because we don’t collect the noise. This is all done by the sensor.”

Depth-sensing 3D cameras like the Microsoft Kinect are easily overwhelmed by bright light, say researchers from Carnegie Mellon University and the University of Toronto. They’ve developed a technology that projects a pattern onto an object or subject, which helps it determine the 3D contours under bright light. Carnegie Mellon University

Explaining how depth cameras work, the researchers described how a low-power laser projects “a pattern of dots or lines over a scene. Depending on how these patterns are deformed or how much time it takes light to reflect back to the camera, it is possible to calculate the 3D contours of the scene.” It is unclear whether the technology could be added to existing depth-sensing cameras through a software update.
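The quoted passage describes the two standard ways depth cameras turn those observations into distance: triangulating from how far the projected pattern shifts, or timing the light’s round trip. A minimal sketch of both, with assumed focal-length and baseline values that do not come from the paper:

```python
# Hypothetical numbers throughout; this is not the CMU/Toronto implementation.
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

FOCAL_LENGTH_PX = 580.0  # assumed camera focal length, in pixels
BASELINE_M = 0.075       # assumed projector-to-camera separation, in meters

def depth_from_disparity(expected_col, observed_col):
    """Depth (m) from how far a projected dot's image shifted ("deformed")."""
    disparity = expected_col - observed_col  # shift in pixels
    if disparity <= 0:
        raise ValueError("dot must shift toward the baseline for a valid depth")
    return FOCAL_LENGTH_PX * BASELINE_M / disparity

def depth_from_time_of_flight(round_trip_seconds):
    """Depth (m) from how long the light took to reflect back to the camera."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A dot aimed at image column 400 but observed at column 360 is ~1.09 m away.
print(f"{depth_from_disparity(400, 360):.2f} m")
# A pulse returning after 7 nanoseconds puts the surface ~1.05 m away.
print(f"{depth_from_time_of_flight(7e-9):.2f} m")
```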

The new research could open the technology to additional applications, or enhance existing ones such as medical imaging, inspection of shiny parts, and sensing for robots in space. The researchers say the technology could also be incorporated into smartphones, making the imaging technique accessible for more everyday uses.

Enid Burns
Enid Burns is a freelance writer who has covered consumer electronics, online advertising, mobile, technology electronic…