The always-on camera keeps gesture recognition efficient because it doesn’t have to analyze every pixel to determine whether or not to record. “What this camera is actually looking at is not pixel values, but pixels added together in all different ways and a dramatically smaller number of measurements than if you had it in standard mode,” said Georgia Tech School of Electrical and Computer Engineering professor Justin Romberg.
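Romberg’s description points to a compressive-style readout: instead of handing the software every pixel value, the camera works with a much smaller set of randomized sums of pixels. The sketch below is a minimal illustration of that idea, assuming a random +/-1 measurement pattern; the team’s actual sensing hardware and gesture classifier are not described here.

```python
# Illustrative sketch only: random-sum measurements in place of a full pixel readout.
import numpy as np

rng = np.random.default_rng(0)

def compressive_measurements(frame, num_measurements, rng=rng):
    """Collapse a frame of N pixels into M << N measurements.

    Each measurement is a sum of pixels weighted by a random +/-1 pattern,
    mimicking "pixels added together in all different ways" rather than
    reading out and analyzing every pixel individually.
    """
    pixels = frame.reshape(-1).astype(np.float64)               # N pixel values
    phi = rng.choice([-1.0, 1.0],
                     size=(num_measurements, pixels.size))      # M x N pattern
    return phi @ pixels                                         # M measurements

# Example: a 64x64 frame (4,096 pixels) reduced to 128 measurements,
# roughly a 32x reduction in the data handed to a gesture detector.
frame = rng.integers(0, 256, size=(64, 64))
y = compressive_measurements(frame, num_measurements=128)
print(y.shape)  # (128,)
```

A detector that only needs to decide “is someone gesturing at me?” can work from these few summed measurements, which is far less data to move and process than a full frame.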
With less data to analyze, the software gives the camera a big efficiency boost, improved further by a lower frame rate and hardware changes. That efficiency is essential for many remote applications, researchers say. The camera could also be used in surveillance and robotics or, more simply, for taking selfies hands-free.
“Cameras are being added to more and more devices these days, but they don’t have much interactivity,” said Arijit Raychowdhury, an associate professor in the Georgia Tech School of Electrical and Computer Engineering. “What we are studying are smart cameras that can look at something specific in the environment at extreme energy efficiencies and process the data for us.”
The group is working to make the camera energy efficient enough that it can run solely on ambient energy, such as solar power. Along with conserving energy, the gesture feature could also prevent a data overload (essentially running out of room on an SD card or internal hard drive) caused by recording unnecessary images, the way wind can trip a traditional motion sensor.
The team, which also includes Anvesha A, Shaojie Xu and Ningyuan Cao, is next looking to add wireless capabilities to send the images to another device remotely. The project is being supported by Intel Corp. and the National Science Foundation.