The key to next-gen 3D vision for autonomous cars is … praying mantis goggles?

Researchers at the U.K.’s Newcastle University may have discovered a new way to more efficiently model computer vision systems — and it’s all thanks to a multi-year project that involves putting tiny 3D goggles on praying mantises.

“The 3D glasses we use are similar to the old-school 3D glasses we used to use in the cinema,” Dr. Vivek Nityananda, part of Newcastle’s Institute of Neuroscience, told Digital Trends. “The idea behind these is that having different color filters on each eye allows each eye to see a different set of images that the other eye can’t see. By manipulating the geometry of the images the eyes see, we could create the 3D illusions, exactly like you see in the cinema. For our glasses, we cut out teardrop shapes from the color filters and fitted them onto the mantis using beeswax. The mantises were then allowed to recover overnight, and we could try them in experiments the next day. Since we used beeswax, we could melt the wax and remove the glasses once the experiment was over.”

The glasses allowed the researchers to demonstrate that mantises have a way of computing stereoscopic distance to objects that differs from that of any other animal, including humans. Instead of comparing the stationary luminance patterns across the two eyes, as other vision systems do, mantises rely on matching motion or other kinds of change in each eye’s view of the world.

A praying mantis fitted with 3D goggles, on a robotic arm with a circuit board, shown with Dr. Vivek Nityananda. Credit: Newcastle University

This could be exciting because detecting change simultaneously in both eyes is a simpler computational problem than figuring out which details of each eye’s view match those of the other. It suggests that mantis stereo vision could be easier to model in computer vision applications and robotics, especially in situations where less computational power is available.
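The contrast between the two approaches can be sketched in a few lines. The following is a hypothetical illustration of the general idea, not the researchers’ actual model: the classic approach searches for the shift that best matches the static luminance patterns seen by the two eyes, while a mantis-like cue needs only to find where the two eyes’ change maps coincide over time.

```python
import numpy as np

def correspondence_disparity(left, right, max_disparity):
    """Classic luminance-matching stereo (sketch): test each candidate
    shift and pick the one where the two eyes' static patterns best
    agree. Cost grows with the number of disparities searched."""
    width = left.shape[1]
    errors = [
        np.mean((left[:, d:] - right[:, : width - d]) ** 2)
        for d in range(max_disparity)
    ]
    return int(np.argmin(errors))

def coincident_change(left_t0, left_t1, right_t0, right_t1, threshold=0.1):
    """Mantis-like cue (simplified, hypothetical): each eye reports only
    where its view changed between two moments in time; a target is
    wherever both change maps agree. No pattern matching is needed."""
    left_change = np.abs(left_t1 - left_t0) > threshold
    right_change = np.abs(right_t1 - right_t0) > threshold
    return left_change & right_change  # True where both eyes saw change
```

The change-based cue discards almost all of the image detail that the correspondence search has to compare, which is the sense in which it is computationally cheaper.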

“So far we’ve been designing these systems to see and react to the world in the same way we do, but our brains are immensely complex and power-hungry machines that may not be the ideal biological model to inspire efficient design,” Dr. Ghaith Tarawneh, another researcher on the project, told Digital Trends. “Mantis 3D vision uses the exact sort of computational trickery that challenges our way of seeing things: A view of the world radically different from ours, but evidently more fit for purpose. Modeling autonomous cars and drones after insect vision can give them the same capabilities: A superior ability to see the details that matter, with shorter reaction times and longer battery life.”

A paper describing the work was recently published in the journal Current Biology.