Artificial intelligence could soon help video cameras see what lies just beyond the lens's view, using shadows. Researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed an algorithm that "sees" what's outside the video frame by analyzing the shadows and shading that out-of-view objects create. The research, Blind Inverse Light Transport by Deep Matrix Factorization, was published today, Dec. 6.
The algorithm works almost like reading shadow puppets in reverse: the computer sees the bunny-shaped shadow and can then estimate the object that cast it. The computer doesn't know what that object is, but it can produce a rough outline of its shape.
The researchers used shadows and geometry to teach the program to predict light transport, or how light moves through a scene. When light hits an object, it scatters, creating shadows and highlights. The research team worked to "unscramble" that light from the pattern of shading, shadows, and highlights. Further refinement helped the computer estimate the most plausible shape out of all the possibilities.
With an understanding of how light moves, the algorithm can then create a rough reconstruction of the object that created the shadow, even though the object itself never appears in the video. The algorithm relies on two neural networks: one to "unscramble" the light transport, and another to generate a video of what the hidden object looks like.
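The core idea can be sketched as a matrix factorization: each observed frame is (approximately) a fixed light-transport matrix applied to the hidden scene's light, and the task is to recover both factors from the observations alone. The minimal NumPy example below is purely illustrative, with made-up sizes and names, and uses plain nonnegative matrix factorization updates in place of the paper's deep-network priors.

```python
import numpy as np

# Toy version of the linear light-transport model: observed frames Y are a
# fixed transport matrix T applied to the hidden scene's light L over time.
# All names and sizes here are illustrative, not from the authors' code.
rng = np.random.default_rng(0)
n_pixels, n_scene, n_frames = 64, 16, 100

# Ground truth, nonnegative (light intensities can't be negative).
T_true = rng.random((n_pixels, n_scene))
L_true = rng.random((n_scene, n_frames))
Y = T_true @ L_true  # observed video, one flattened frame per column

# Blind recovery: factor Y back into nonnegative T and L knowing neither.
# Multiplicative NMF updates stand in for the deep matrix factorization.
T = rng.random((n_pixels, n_scene)) + 1e-3
L = rng.random((n_scene, n_frames)) + 1e-3
err0 = np.linalg.norm(Y - T @ L)
for _ in range(200):
    L *= (T.T @ Y) / (T.T @ T @ L + 1e-9)
    T *= (Y @ L.T) / (T @ L @ L.T + 1e-9)
err = np.linalg.norm(Y - T @ L)
print(f"reconstruction error: {err0:.2f} -> {err:.2f}")
```

The recovered `L` plays the role of the pixelated silhouette video described below: it is only determined up to an ambiguity (scaling and permutation here), which is why the paper needs extra priors to pick the most plausible shape.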
The algorithm creates a pixelated silhouette of the shape and how that shape moves. That’s not enough for creating a spy camera that sees around corners, but it does help make those scenes from CSI where the investigators pull out detail that wasn’t there before a little more plausible.
The researchers suggest that, with further refinement, the technology could be used for applications like enhancing the vision of self-driving cars. By reading the shadow information, the car could potentially see an object about to cross the road before it enters the camera's field of view. That application is still a long way off: the researchers say the process currently takes about two hours to reconstruct a mystery object.
The research is based on similar work from other MIT researchers that used special lasers to see what a camera couldn’t. The new research works without any extra equipment beyond the camera, computer, and software.