Surely you’ve seen, either in movies or educational shows, those artificial intelligence (AI) computers that you interact with through various cables, or input leads, connected to your fingers, your hands, your head, and your feet. Depending on the sophistication of the devices and the software, nearly all parts of your body create input for the AI computer. Now, imagine interacting with, even controlling, your computer via hand and head movements, even facial expressions, without the input leads and cables.
Or maybe you'd rather control your computer with voice commands, as you can with iPads and Android devices? Enter RealSense and VoiceAssist, two new interactivity enhancement features slated for the next generation of Intel CPUs.
If it all works the way Intel claims, you'll soon be interacting with your computer via voice commands and hand and head gestures, rather than physical pointing devices and keyboards. Here's a real sense of how Intel's new RealSense and VoiceAssist technologies work.
What is RealSense?
As part of Intel’s fifth-gen processor technology enhancements, “RealSense devices can see in 3D.” Among other things, this means that objects, such as your hands and your face, can be separated from the background in real time, which lets your computer capture the shape and depth of what it sees.
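To get a feel for what depth-based separation looks like, here's a minimal sketch, not Intel's actual algorithm: with a depth map in hand, foreground objects can be isolated from the background with a simple distance threshold. The array values and cutoff distance below are invented for illustration.

```python
import numpy as np

# A toy 4x4 "depth map": each value is a distance from the camera in meters.
# A real RealSense depth frame is the same idea at full camera resolution.
depth = np.array([
    [2.1, 2.0, 0.6, 0.5],
    [2.2, 0.7, 0.6, 2.0],
    [2.1, 0.7, 0.7, 2.1],
    [2.0, 2.1, 2.2, 2.0],
])

# Anything closer than 1 meter is treated as foreground (e.g., a hand);
# everything farther away is background and can be masked out.
CUTOFF_M = 1.0
foreground_mask = depth < CUTOFF_M

print(foreground_mask.sum())  # prints 6 (foreground pixels)
```

An ordinary webcam image gives you no such shortcut; it's the per-pixel distance information that makes this one-line separation possible.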
A primary component of RealSense technology is its 3D camera, which lets you capture 3D scans of people and objects for any number of uses. RealSense also lets you interact with the device via hand gestures. In fact, RealSense supports 22 tracking points per hand in 3D space.
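Conceptually, 22 tracking points per hand means each hand arrives as a set of labeled 3D coordinates that software can reason about. Here's a toy sketch of detecting a "pinch" gesture from two fingertip positions; the joint names, coordinates, and threshold are invented for illustration, and the real SDK exposes its own identifiers and units.

```python
import math

# Hypothetical sample of tracked joints: 22 named 3D points per hand.
# Names and coordinates here are made up, not the SDK's actual schema.
hand = {
    "thumb_tip": (0.02, 0.10, 0.40),
    "index_tip": (0.03, 0.11, 0.40),
    # ... 20 more joints (wrist, palm center, finger bases, etc.)
}

def is_pinching(joints, threshold_m=0.03):
    """Treat thumb and index fingertips closer than ~3 cm as a pinch."""
    return math.dist(joints["thumb_tip"], joints["index_tip"]) < threshold_m

print(is_pinching(hand))  # prints True
```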
The RealSense 3D camera
Much of the RealSense experience will come from something called an Intel RealSense 3D Camera. Intel describes how RealSense and the camera work like this: “Devices with Intel RealSense 3D camera have three lenses: a conventional camera, an infrared camera, and an infrared laser projector. Together, the three lenses allow the device to infer depth by detecting infrared light that has bounced back from objects in front of it. This visual data, taken in combination with Intel RealSense motion-tracking software, creates a touch-free interface that responds to hand, arm, and head motions as well as facial expressions.”
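Intel hasn't published the exact math, but depth cameras of this kind generally rely on triangulation: the farther an object is, the smaller the shift (disparity) of its projected pattern between the projector and the infrared camera. Here's a sketch of that classic relationship, with made-up camera parameters rather than the RealSense camera's real specs.

```python
# Classic triangulation: depth is inversely proportional to disparity.
# The parameters below are invented examples for illustration only.
FOCAL_LENGTH_PX = 600.0   # focal length, in pixels
BASELINE_M = 0.05         # projector-to-camera separation, in meters

def depth_from_disparity(disparity_px):
    """Return distance in meters for a measured pixel disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return FOCAL_LENGTH_PX * BASELINE_M / disparity_px

# A nearby object shifts a lot; a distant one shifts only a little.
print(depth_from_disparity(60.0))  # prints 0.5 (meters)
print(depth_from_disparity(15.0))  # prints 2.0 (meters)
```

Repeating that calculation for every pixel is what turns the three lenses' raw images into the depth map the motion-tracking software consumes.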
Intel says that the 3D camera will let you interact with your device more like you interact with people — with natural movements. While there’s nothing particularly natural about gesturing in front of a computer screen, the ability to gesture midair to navigate and manipulate your computer is intriguing.
Intel has been working with voice-recognition leader Nuance, maker of Dragon Dictate, for quite some time in an attempt to get voice commands integrated into Intel’s chips. It looks like that work will pay off soon: between RealSense 3D and VoiceAssist, we should have several new ways to interact with our computers, hybrids, and 2-in-1s.