In June, a company called Emteq came out of stealth mode and revealed that it’s working on a system to enable emotional interaction in virtual reality. Right now we interact with virtual objects, NPCs, and the surrounding artificial environment using our head movements and hands. That’s what the company considers first- and second-generation VR, with the third generation consisting of eye-tracking technology. Beyond that, the company hopes interaction through facial expressions will move the VR industry into its fourth generation.
Emteq’s new system is called FaceTeq, a facial sensing platform that tracks facial gestures and biometric responses. The platform detects the user’s electrical muscle activity, eye movement, heart rate and heart rate variability, head position, and stress response. The AI-powered FaceTeq engine samples all of this data 1,000 times per second and interprets it in real time, determining the user’s emotional state and physical expression.
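Emteq hasn’t published FaceTeq’s internals, but the description above — several biometric channels sampled 1,000 times per second and classified in real time — maps onto a familiar sensing pipeline. Here’s a minimal, hypothetical sketch in Python: the channel names, the `read_sample` stand-in, and the toy threshold classifier are all illustrative assumptions, not Emteq’s actual design.

```python
import math
import random

# Hypothetical channels modeled on those the article lists; not Emteq's API.
CHANNELS = ["emg", "eye_movement", "heart_rate", "head_position"]
SAMPLE_RATE_HZ = 1000  # FaceTeq reportedly samples 1,000 times per second

def read_sample(t):
    """Stand-in for a real sensor read: one simulated value per channel."""
    return {ch: math.sin(t / SAMPLE_RATE_HZ) + random.gauss(0, 0.05)
            for ch in CHANNELS}

def classify(window):
    """Toy 'emotional state' classifier over a one-second window.

    A real system would run a trained model over all channels; this
    placeholder just thresholds mean facial-muscle (EMG) activity.
    """
    mean_emg = sum(s["emg"] for s in window) / len(window)
    return "tense" if mean_emg > 0.5 else "relaxed"

# Collect one second of data (1,000 samples at 1 kHz) and classify it.
window = [read_sample(t) for t in range(SAMPLE_RATE_HZ)]
state = classify(window)
print(state)
```

The key design point the sketch illustrates is windowing: individual 1 kHz samples are too noisy to label on their own, so a real-time system aggregates them into short windows before inferring an emotional state.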
“Our facial expressions are at the core of our social interactions, enabling us to silently, and instantly communicate our feelings, desires and intentions,” the company states. “Often called an empathy machine, VR represents a new paradigm in Human Computer Interaction; a naturalistic interface using physical motion. But the empathy machine needs emotional input, and FaceTeq provides the solution.”
Emteq reportedly wants to use this technology to grow the social VR space. The biometric sensors used by the FaceTeq platform would be installed in the faceplate of a VR headset such as the Oculus Rift or HTC Vive. This would be far superior to using cameras alone, as the sensors would pick up every frown, facial twitch, and slight eye movement, accurately depicting the user in the virtual realm.
“Imagine playing an immersive role-playing game where you need to emotionally engage with a character to progress,” said Graeme Cox, chief executive and co-founder of Emteq. “With our technology, that is possible — it literally enables your computer to know when you are having a bad day or when you are tired.”
The company believes facial expressions are a major component missing from VR-based interactions. While some head-mounted displays (HMDs) use attached depth cameras to capture the lower portion of the user’s face, this method misses the most important facial detail: the eyes. It’s the muscles surrounding the eyes that help visually define surprise, anger, and sadness, and that’s precisely the area an HMD covers up.
Right now the company is aiming FaceTeq at developers, creative agencies, brands, market researchers, innovators, and health care professionals, meaning the technology isn’t meant only for social VR interaction. Researchers could use it to understand “the basis of our psychological and emotional responses to the world,” while brands could use FaceTeq to build an emotional connection with their audience.
All that said, FaceTeq is an open platform that’s low-cost and lightweight. The company believes the platform will “herald” the fourth generation of VR, so we’ll see what early adopters, expected to be mostly researchers and developers, produce with it. Meanwhile, the company is working to partner with headset manufacturers and content creators, so stay tuned.