The concept of gesture control isn’t anything new – think of Nintendo’s clumsy Power Glove, which dates all the way back to 1989. In practice, however, we’ve been miles away from the effortless, graceful, almost instinctual computer interaction depicted in sci-fi movies such as Minority Report.
Thalmic Labs, a year-old company based in Waterloo, Ontario, is aiming to change all that with a futuristic armband called the MYO (pronounced “my-yo”). Named for the Greek prefix meaning “muscle,” the MYO reads electrical muscle activity directly, promising seamless interaction with a broad range of technology. A YouTube demo video for the MYO, for instance, shows a man snapping his fingers to start an iTunes track, or throwing up “devil horns” to share a skiing clip on Facebook.
To use the MYO, you strap it onto your arm and perform gestures to issue commands. The company claims it’ll work out of the box with Macs and PCs when it ships sometime in late 2013 (the company’s taking pre-orders for its second shipment of MYOs, shipping early 2014), and it could also be used with iOS and Android devices. The armband communicates with whatever computer or device you’re using via a low-power Bluetooth connection. While the armband itself won’t ship until late in the year, the API will be released this summer so developers can design apps for use with it.
We first covered the MYO during a pre-order rush that ended up selling more than 25,000 armbands. To learn more about the device, we went straight to the source: Aaron Grant, one of the three founders of Thalmic Labs.
In a Skype interview, Grant shared with us what it’s like to start a company straight out of college, how the MYO is different from other gesture control devices, what some of the most interesting ideas people have for using the MYO, and what he sees for the future of gesture control technology.
Digital Trends: So what makes the MYO really stand out from other gesture control devices like Leap Motion’s Leap Controller or Microsoft’s Kinect? How does it work?
Aaron Grant: Gesture control and this whole idea of more natural ways of interacting with your technology is taking off right now, but we’re not convinced that cameras are the way to do it. So, we’ve developed a system that’s completely different from that. It’s an armband that you wear around your forearm, which actually measures the electrical activity from your muscles. Based on that electrical activity, we can determine the pose that your hand is in at any given time, whether you’re making a fist, whether you’re touching your thumb to your index finger or your thumb to your ring finger, whether you’re swiping left to right or up and down. We can get all those different hand positions just from reading the muscle activity in your arm.
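The pipeline Grant describes can be sketched in a few lines of code. The following is a hypothetical illustration, not Thalmic’s actual (proprietary) software: it assumes a multi-channel EMG feed and classifies short windows of samples into hand poses by comparing a simple amplitude feature against per-pose averages.

```python
import numpy as np

# Hypothetical sketch, not Thalmic's actual code: the MYO's real signal
# processing is proprietary. This illustrates the general idea of turning
# windows of multi-channel EMG samples into discrete hand poses.

def rms_features(window):
    """Root-mean-square amplitude per channel, a common EMG feature.

    `window` is a (samples, channels) array of raw EMG readings.
    """
    return np.sqrt(np.mean(np.square(window), axis=0))

def train_centroids(labeled_windows):
    """Average the feature vectors recorded for each pose into one centroid."""
    return {pose: np.mean([rms_features(w) for w in windows], axis=0)
            for pose, windows in labeled_windows.items()}

def classify(window, centroids):
    """Report the pose whose centroid is nearest to this window's features."""
    feats = rms_features(window)
    return min(centroids, key=lambda pose: np.linalg.norm(feats - centroids[pose]))
```

In practice a trained classifier over much richer features would replace the nearest-centroid step, but the overall shape (sample the channels, window the signal, extract features, classify) is the standard one for EMG gesture recognition.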
How has this kind of electronic muscle activity been used in technology previously?
There’s been some research done in terms of prosthetics, like reading the muscle signals and being able to control, for example, a prosthetic hand. The actual underlying technology, which is called electromyography or EMG, has been around for a long time in clinical settings. It’s similar to ECG, which is for your heart, so if you’re ever in the hospital and you’re hooked up to the little machine showing your heartbeat, it’s very similar to that.
From an insider’s perspective, what are some of the challenges that companies have faced in developing gesture-control devices in the past?
[pullquote]We think that the next evolution of computing is going to be things that are more closely integrated with you as a person.[/pullquote]In the last year, we’ve seen tons of different people trying to do gesture control, whether it’s game products like Microsoft’s Kinect or Sony’s PlayStation Move, or other companies like Leap Motion. But everybody else is doing a system that’s based on cameras. We feel that there are a lot of disadvantages to that, the most obvious one being that the camera actually has to be able to see you, which essentially constrains you. For the Leap Motion device, you have to hold your hands above the device itself; it’s a relatively small workspace that you can actually move around in. There’s also a problem with being outside: sunlight affects the camera, because these systems all use some sort of infrared sensing, so you can’t use them in direct sunlight. And then, say you’re playing a multi-player game, there’s the issue of occlusion. If you step in front of somebody, between that person and the camera, it can’t see them anymore.
What is the MYO like in action? What does it feel like to wear?
For the moment, it’s going to be this one device that works with everything. The device we’re shipping late this year will be one-size-fits-all: a single design that fits the vast majority of the population, including kids 12 and up. The benefit of that is that you don’t have to have one that fits only you. If your friends come over, they can put it on and use it. Basically, it’s an expandable band that can grow or shrink as necessary. A lot of the time, I just wear one of our prototypes around all day. It’s not even the nice, final, industrial-design version, and after a while you just don’t even notice it.
In terms of your own background, what was it like starting Thalmic Labs straight out of college at the University of Waterloo?
The program is called Mechatronics Engineering, so basically an intersection of mechanical, software, electrical, and systems integration – really a jack-of-all-trades program. There are three co-founders: myself and two others who also graduated from Waterloo (Stephen Lake and Matthew Bailey). Same time, same class. We all knew that we wanted to work with each other and that we didn’t want to go off and work for big companies. We wanted to do something ourselves. The three of us graduated last April, and founded the company about a week later. It’s certainly been quite a ride so far, and it’s going to be a pretty awesome year.
How did you feel seeing such a positive response to your pre-order campaign?
We were definitely excited, and I guess a bit relieved to see that it all went well. It’s great to see the success we’ve had so far with the preorder campaign, and it’s good to see that people actually want this. Up till that point, we had put in a ton of work developing the technology and getting it to a point where it actually works. We’ve had lots of software developers emailing us with cool ideas … every day we get emails from people who want to use it for things we’ve never even thought of before.
How did you all decide to get started in gesture control together?
I mean, this is kind of cliché, but we were literally sitting in a local bar talking about the future of computing, just throwing around ideas, and one of the things we were talking about was these new interfaces – things such as Google Glass, and where that kind of stuff is going. And what we realized was, there’s been a lot of work done on the output modalities, but not as much on new ways of actually interacting with this new technology. If the output technology is evolving, then the input technology is going to evolve with it.
What background did you have in gesture control, other than the program at Waterloo?
We hadn’t worked specifically in gesture control, but we all specialized in separate areas that actually made a product like this possible. One of my co-founders, Steve, spent a lot of time in the medical industry developing medical devices, which is where the idea to use muscle activity as the input came from. Myself, I have a pretty solid software background, and my third co-founder, Matt, has specialized mostly in mechanical product design, and more recently in machine intelligence.
What are some of the most interesting ways people want to use the MYO?
We’ve had a couple of doctors contact us who want to use this when they’re in the operating room. They’re scrubbed in, so they can’t actually touch anything with their hands, but they need to navigate the patient’s diagnostic imagery to locate whatever they need to work with for the surgery. Another one that we’ve had is DJs and entertainers contacting us – they want to use it to control their lights and effects on stage. So just a huge number of different things, a very wide variety. That’s just scratching the surface.
Has anyone had a completely outrageous idea for how to use the MYO?
We had the Star Wars reference on our website, “release your inner Jedi,” and some people have said that they want to use [the MYO] to control their kids. They suggested that we should make a complementary device, a collar that goes around a dog’s neck or a child’s neck. That was a pretty out-there, silly one. I wouldn’t be surprised if someone actually built that. It’s kind of creepy, yeah, but that’s definitely one that I remembered.
Do you see the MYO as a sort of fun Jedi mind-trick, or do you think it will fundamentally change the way people interact with technology?
Definitely the latter. That’s absolutely our goal. We don’t want to be another gimmick that you use once and put aside. Since personal computing took off in the early ‘80s, the form factor has been constantly evolving. First you had these clunky desktop computers, then you had laptops, and now you have smartphones and tablets. And along with that, the input technology is also evolving. When the iPhone came out and Apple introduced multi-touch, that was a new way of interacting with your technology. It’s not a mouse, it’s not a keyboard, it’s something completely new. I definitely think that’s going to continue to evolve, and our goal is to be at the forefront of that.
Has anything surprised you about the pre-order phase so far?
Actually, interestingly, looking at our pre-order so far, we were expecting that the majority of people would be developers and hackers – Silicon Valley kind of people. But we’ve actually had only 30 percent of the orders from [them]. The rest are just general consumers who want to use it for everyday things. Additionally, only 40 percent of our orders have been within North America. Right now, we have orders from around 130 different countries, [including] Germany, Great Britain, Australia, China, Russia, [and] France.
What are you most excited about in the realm of gesture control for the future?
In the long term, I’m excited about this being something you would wear all the time. We think that the next evolution of computing is going to be things that are more closely integrated with you as a person. The buzzword we use is “contextual computing.” That’s definitely what I’m most excited about.
[MYO images copyright: Thalmic Labs]