Yes, a hand with an eye embedded in it does sound a whole lot like one of the freakier scenes from Guillermo del Toro’s movie Pan’s Labyrinth. But it also describes a new smart prosthesis being developed by Newcastle University in the U.K.
What researchers there have created is a bionic hand fitted with a camera that photographs whatever is placed in front of it, proactively triggering grip movements based on what it sees.
“Here we developed a vision-based prosthetic control solution that can identify the appropriate grasp type according to a learned abstract representation of the object, rather than the explicitly-measured object dimensions,” Ghazal Ghazaei, a co-author on the project, told Digital Trends.
The idea is to speed up the rate at which bionic limbs can operate by bypassing some of the sequences that usually have to be completed before they carry out a task. Many amputees find prostheses slow compared to a remaining healthy arm or leg. By automatically readying a series of movements based on what is placed in front of the hand, however, the hope is that the prosthesis’ response rate can be brought more in line with that of a biological limb.
The built-in camera is equipped with image recognition algorithms based on a convolutional neural network. This brain-inspired technology is trained on thousands of object images. Unlike a normal image recognition system, though, this one labels objects according to the type of grasp needed to interact with them. That could be anything from a pinch grip to a “palmar wrist neutral” or “palmar wrist pronated” one (the grips you make when picking up a cup versus picking up a TV remote).
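To make the idea concrete, here is a minimal sketch of the final step of such a pipeline: once a classifier has produced an object label, that label is mapped directly to a grasp command. This is purely illustrative and not the Newcastle team’s code; the object names, grasp labels, and the stubbed-out classifier are all hypothetical.

```python
# Illustrative sketch: map a recognized object class to a grasp type,
# as a vision-based controller might. In the real system, the label
# would come from a convolutional neural network looking at a snapshot.

GRASP_FOR_OBJECT = {
    "cup": "palmar wrist neutral",
    "tv_remote": "palmar wrist pronated",
    "coin": "pinch",
}

def suggest_grasp(object_label: str) -> str:
    """Return the grasp type the hand should pre-form for this object."""
    # Fall back to a pinch grip for unrecognized objects.
    return GRASP_FOR_OBJECT.get(object_label, "pinch")

print(suggest_grasp("cup"))        # palmar wrist neutral
print(suggest_grasp("tv_remote"))  # palmar wrist pronated
```

The point of the design is that the hand never measures the object; a single snapshot, classified into a grasp category, is enough to pre-shape the fingers.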
“The most exciting thing is that these features are learned automatically without any hand engineering, and the system suggests a grasp within about 40 microseconds by taking only one snapshot of an object without any measurements,” Ghazaei continued.
The technology has already been put through its paces by a small number of amputees at local hospitals, although Ghazaei said more work is needed before it’s ready for prime time.
“The project is currently in the prototyping phase and further developments to the computational parts are in the pipeline,” she said. “There is also a database of everyday objects that is being created, which will enable the device to adapt to a wide range of objects and grasp types over the course of the project’s development phase. Due to the relatively low cost associated with this design, it has the potential to be implemented soon. For now, we are focusing on the development of this exciting new opportunity.”