Thanks to Apple’s impressive array of depth-sensing 3D sensors, the newer-model iPhones let you unlock them using facial recognition. But what if identification didn’t require on-board processing at all, and could instead be carried out by the glass of the smartphone display itself? It sounds crazy, but that’s exactly what researchers at the University of Wisconsin–Madison have been working to bring to life. They have developed a type of smart glass that’s able to recognize images (and maybe one day other things) without needing any sensors, circuits, or even power sources to do so.
The process relies on the way that light bends as it passes through the smart glass, which varies depending on the image facing the glass. The glass features tiny, strategically placed bubbles and impurities, which bend and guide the light in very specific ways. If the light matches an expected pattern, the glass then “recognizes” the image that it sees. In a proof-of-concept demonstration, the researchers were able to use their glass to identify handwritten numbers. The light was bent to hit one of ten specific spots on the other side, each corresponding to a different digit. The glass was even dynamic enough to detect when a handwritten “3” was altered to become an “8.”
“In this work the input information [is] encoded on an input wave front,” Erfan Khoram, one of the researchers on the project, told Digital Trends. “This light wave is then projected on the nanophotonic device that is optimized for a specific task. Once the wave enters the device, the light wave reflects, scatters, and mixes inside the medium. These wave phenomena that happen inside the device effectively cause the necessary processing on the signal to create the desired output wave profile.”
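To make the idea concrete: once the medium is designed, inference is just a fixed physical transformation of the incoming wavefront, with the answer read off as the brightest detector spot. The toy sketch below is a loose software analogy, not the researchers’ actual method; it stands in the “medium” with a single fixed linear matrix fitted by least squares, and all data and names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "wavefronts": 16-pixel patterns for three classes, standing in
# for handwritten digits, with small per-sample noise.
prototypes = rng.random((3, 16))
X = np.vstack([p + 0.05 * rng.standard_normal((20, 16)) for p in prototypes])
y = np.repeat(np.arange(3), 20)

# "Design" the medium offline: fit a fixed transmission matrix T that steers
# each class's light toward its own detector spot (least-squares fit).
targets = np.eye(3)[y]
T, *_ = np.linalg.lstsq(X, targets, rcond=None)

# Inference is a single fixed pass -- the analogue of light propagating
# through the engineered glass -- followed by picking the brightest spot.
pred = np.argmax(X @ T, axis=1)
print((pred == y).mean())
```

The point of the analogy is the division of labor: all the optimization happens when the medium is designed, while recognition itself needs no computation beyond the propagation step. The real device additionally exploits nonlinear wave effects, which a single linear matrix cannot capture.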
While this is a neat alternative to conventional A.I. image recognition systems, Khoram said that the work could also be used to augment existing processes. “One other application in the same context would be for the smart glass to receive the coming waves containing the full information of the scene, and give out only certain features of the scenery,” Khoram said. “[This would have the effect of] simplifying the representation of the input to the neural network and consequently making the digital aspect of the process lighter.”
A paper describing the work was recently published in the journal Photonics Research.