So far, gesture control systems have been a hit at CES 2012, but the big manufacturers aren't the only ones exploring the concept of using movement and gestures to manipulate on-screen images, or pushing the technology forward.
The video you see below comes from a Chinese group called Sharpnow, and it demonstrates a product called SharpSight. The bank of lenses to the right of the screen registers the movements and shapes made with your hand, then translates them into commands on-screen.
For example, a clenched fist brings up a menu, and sub-menus are then selected by extending the corresponding number of fingers. Items can be grabbed and dragged around the screen, then pinched to resize them. The demo moves on to show circular movements navigating a carousel of objects, and even a sideways ‘thumbs-up’ to confirm a choice.
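As a rough illustration of the interaction scheme the demo shows, the recogniser's job boils down to mapping a detected hand shape to a UI command. The sketch below is entirely hypothetical: the gesture names, handlers, and `dispatch` function are invented for illustration, since SharpSight's actual software interface isn't public.

```python
# Hypothetical sketch: routing recognised hand gestures to UI commands,
# as described in the SharpSight demo. All names here are invented.

def open_menu():
    return "menu opened"

def confirm_choice():
    return "choice confirmed"

def select_submenu(finger_count):
    # Sub-menus are chosen by the number of extended fingers (1-5).
    return f"sub-menu {finger_count} selected"

# Static gestures map directly to commands.
GESTURE_COMMANDS = {
    "clenched_fist": open_menu,
    "thumbs_up_sideways": confirm_choice,
}

def dispatch(gesture, finger_count=None):
    """Route a recognised gesture to the matching UI command."""
    if gesture == "extended_fingers" and finger_count is not None:
        return select_submenu(finger_count)
    handler = GESTURE_COMMANDS.get(gesture)
    return handler() if handler else "unrecognised gesture"

print(dispatch("clenched_fist"))        # menu opened
print(dispatch("extended_fingers", 3))  # sub-menu 3 selected
```

The finger-count branch also makes the limitation mentioned below concrete: with one hand, this selection scheme naturally tops out at five sub-menu entries.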
Obviously, the gestures seen in the video relate only to the software running on the test PC, and problems would no doubt arise at this stage with any menu consisting of more than five choices. As a technical demonstration, though, it’s very impressive. It’s especially good to see such subtle movements being translated so accurately: just watch the ‘ghost’ hand mimic exactly what the operator’s hand is doing.
At the moment, SharpSight is a prototype, but let’s hope we see it developed further over the coming year.