
New ‘shady’ research from MIT uses shadows to see what cameras can’t

Video: Computational Mirrors: Revealing Hidden Video

Artificial intelligence could soon help video cameras see what lies just beyond the edge of the frame by using shadows. Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have concocted an algorithm that “sees” what’s out of the video frame by analyzing the shadows and shading that out-of-view objects create. The research, Blind Inverse Light Transport by Deep Matrix Factorization, was published today, Dec. 6.

The algorithm works almost like reading shadow puppets in reverse — the computer sees the bunny-shaped shadow and is then able to create an estimate of the object that created that shadow. The computer doesn’t know what that object is, but can provide a rough outline of the shape.


The researchers used shadows and geometry to teach the program how to predict light transport, or how light moves in a scene. When light hits an object, it scatters, creating shadows and highlights. The research team worked to “unscramble” that light from the pattern of the shading, shadows, and highlights. Further refinement helped the computer estimate the most plausible shape out of all the possibilities.
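To picture the light-transport idea in the simplest possible terms, the toy sketch below models the visible surface as a linear mix of the hidden scene’s pixels. The shapes, names, and the random transport matrix are purely illustrative assumptions, not the CSAIL team’s code.

```python
import numpy as np

# Illustrative linear light-transport model (not the researchers' code).
# Each frame the camera sees is modeled as a linear mix of the hidden
# scene's pixels: observed = T @ hidden, where T is a light-transport
# matrix describing how light from every hidden point lands on every
# visible point after bouncing, scattering, and casting shadows.

rng = np.random.default_rng(0)

num_observed_pixels = 64   # pixels on the visible wall or floor patch
num_hidden_pixels = 16     # pixels in the out-of-view scene

T = rng.random((num_observed_pixels, num_hidden_pixels))  # unknown in practice
hidden_frame = rng.random(num_hidden_pixels)              # the "mystery object"

observed_frame = T @ hidden_frame  # what the camera actually records

# "Unscrambling" means recovering both T and hidden_frame from many
# observed frames -- an ill-posed factorization problem the MIT team
# tackles with neural networks.
print(observed_frame.shape)
```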

With an understanding of how light moves, the algorithm can then create a rough reconstruction of the object that created that shadow, even though the object itself isn’t actually in the video. The algorithm relies on two neural networks, one to handle the “unscrambling” and another to generate a video of what that object looks like.
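The paper’s title points to deep matrix factorization, and a heavily simplified reading of that idea is sketched below: two small networks jointly produce a transport matrix and a hidden video, trained only to reproduce the observed frames. Every shape, layer size, and training detail here is an assumption for illustration; the actual CSAIL implementation differs.

```python
import torch

# Hedged sketch of "deep matrix factorization": two networks output a
# light-transport matrix T and a hidden video L, and both are trained so
# that T @ L reproduces the observed footage. Shapes and data are made up.

num_frames, obs_pixels, hidden_pixels = 100, 64, 16
observed = torch.rand(obs_pixels, num_frames)  # stand-in for real footage

# Network 1: generates the transport matrix from a fixed latent code.
transport_net = torch.nn.Sequential(
    torch.nn.Linear(8, 128), torch.nn.ReLU(),
    torch.nn.Linear(128, obs_pixels * hidden_pixels),
)
# Network 2: generates the hidden video (one column per frame).
video_net = torch.nn.Sequential(
    torch.nn.Linear(8, 128), torch.nn.ReLU(),
    torch.nn.Linear(128, hidden_pixels * num_frames),
)

z_t = torch.randn(8)  # fixed random inputs, in the spirit of deep image priors
z_v = torch.randn(8)

opt = torch.optim.Adam(
    list(transport_net.parameters()) + list(video_net.parameters()), lr=1e-3
)

for step in range(2000):
    T = transport_net(z_t).reshape(obs_pixels, hidden_pixels).relu()  # light is non-negative
    L = video_net(z_v).reshape(hidden_pixels, num_frames).relu()
    loss = torch.mean((T @ L - observed) ** 2)  # reconstruction error only
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Roughly speaking, the structure of the networks acts as a soft prior that favors coherent, plausible factorizations over arbitrary ones, which is what lets the method settle on the “most plausible shape” described above.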

The algorithm creates a pixelated silhouette of the shape and how that shape moves. That’s not enough for creating a spy camera that sees around corners, but it does help make those scenes from CSI where the investigators pull out detail that wasn’t there before a little more plausible.

The researchers suggest that, with further refinement, the technology could be used for applications like enhancing the vision of self-driving cars. By reading the shadow information, the car could potentially see an object about to cross the road before it even enters the camera’s field of view. That application is still a long way off; the researchers say the process currently takes about two hours to reconstruct a mystery object.

The research is based on similar work from other MIT researchers that used special lasers to see what a camera couldn’t. The new research works without any extra equipment beyond the camera, computer, and software.
