RadarCat was created within the university’s Computer Human Interaction research group. The radar-based sensor used in RadarCat stems from the Project Soli alpha developer kit provided by the Google Advanced Technology and Projects (ATAP) program. This sensor was originally created to detect the slightest of finger movements, but the RadarCat team saw even greater potential.
“The Soli miniature radar opens up a wide range of new forms of touchless interaction. Once Soli is deployed in products, our RadarCat solution can revolutionize how people interact with a computer, using everyday objects that can be found in the office or home, for new applications and novel types of interaction,” said Professor Aaron Quigley, Chair of Human Computer Interaction at the university.
Google’s Soli chip is smaller than a quarter, measuring just 8mm x 10mm and packing both the sensor and the antenna array. According to Google, the chip broadcasts a wide beam of electromagnetic waves. When an object enters the beam, the energy is scattered back in a way that is specific to that object, so the sensor can derive properties such as shape, size, orientation, and material from the reflected energy pattern.
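The Soli developer kit’s actual API isn’t public, but the basic idea of boiling a scattered radar return down to a compact feature vector can be sketched roughly as follows. Everything here, from the array shape to the choice of features, is an assumption for illustration rather than a description of how Soli or RadarCat actually processes the signal.

```python
import numpy as np

def radar_features(frame: np.ndarray) -> np.ndarray:
    """Summarize one radar frame into a small feature vector.

    `frame` is assumed to be a (channels, samples) array of received
    signal amplitudes -- a stand-in for whatever the real kit returns.
    """
    magnitude = np.abs(frame)
    spectrum = np.abs(np.fft.rfft(frame, axis=1))  # per-channel frequency content
    return np.concatenate([
        magnitude.mean(axis=1),   # average reflected energy per channel
        magnitude.std(axis=1),    # how much the return fluctuates
        spectrum.argmax(axis=1),  # dominant frequency bin per channel
    ])
```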
“Soli tracks and recognizes dynamic gestures expressed by fine motions of the fingers and hand,” Google states. “In order to accomplish this with a single chip sensor, we developed a novel radar sensing paradigm with tailored hardware, software, and algorithms.”
As seen in the video above, the RadarCat device is connected to a Surface 3 via a USB cable. When the user places a hand over the device, the program on the Surface plots the raw radar signals, which change as the hand moves up and down. The demonstration goes on to scan a smartphone, a metal plate, a glass of water, and more. Machine learning enables the PC to recognize what it is scanning and correctly report what the object really is.
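The article doesn’t say which classifier RadarCat uses, so the sketch below simply pairs the hypothetical radar_features() helper above with an off-the-shelf random forest from scikit-learn to show the general train-then-predict loop. The object labels and frames are placeholders, not real training data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Placeholder training data: one stored radar frame per labelled object.
# In practice each object would be scanned many times by the sensor.
labels = ["nexus5_face_up", "nexus5_face_down", "glass_of_water", "metal_plate"]
frames = [np.random.rand(8, 64) for _ in labels]   # fake (channels, samples) frames

X = np.stack([radar_features(f) for f in frames])  # feature vectors from the sketch above
clf = RandomForestClassifier(n_estimators=100).fit(X, labels)

# Classify a new, unseen frame the same way.
new_frame = np.random.rand(8, 64)
print(clf.predict([radar_features(new_frame)])[0])
```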
What is interesting is that the RadarCat system can tell the difference between an object’s front and back. Notice in the video that the group is using a Nexus 5 smartphone in the demonstration, and RadarCat successfully identifies the phone both with its screen facing down and with it facing up. The system did the same with Google’s 10-inch Nexus 10 tablet.
According to the university, the team conducted three tests to show that RadarCat works. The first test comprised 26 materials, including complex composite objects, while the second consisted of 16 transparent materials with varying thicknesses and dyes. The final test included 10 body parts provided by six participants.
One benefit of RadarCat is that users could find out additional information about a scanned object. For example, place an orange on RadarCat and it will not only identify the fruit but also load up its nutritional information, in any language. The system could also be used in stores to let shoppers compare smartphones.
To see what other applications RadarCat could provide, check out the video posted above.