World’s most advanced robotic hand is approaching human-level dexterity

Remember when a robotic hand meant a clunky metal mitt that could do little more than crush things in its iron grip? Such clichés should be banished for good, based on some impressive work coming out of the WMG department at the U.K.’s University of Warwick.

If the research lives up to its potential, robot hands could pretty soon be every bit as nimble as their flesh-and-blood counterparts. And it’s all thanks to some impressive simulation-based training, new A.I. algorithms, and the Shadow Robot Dexterous Hand created by the U.K.-based Shadow Robot Company (which Digital Trends has covered in detail before).

Researchers at WMG Warwick have developed algorithms that can imbue the Dexterous Hand with impressive manipulation capabilities, enabling two robot hands to throw objects to one another or spin a pen around between their fingers.

“The Shadow Robot Company [is] manufacturing a robotic hand that is very similar to the human hand,” Giovanni Montana, professor of Data Science, told Digital Trends. “However, so far this has mostly been used for teleoperation applications, where a human operator controls the hand remotely. Our research aims at giving the hand the ability to learn how to manipulate objects on its own, without human intervention. In terms of demonstrating new abilities, we’ve focused on hand manipulation tasks that are deemed very difficult to learn.”

In a paper titled “Solving Challenging Dexterous Manipulation Tasks With Trajectory Optimisation and Reinforcement Learning,” the Warwick researchers created 3D simulations of the hands using a physics engine called MuJoCo (Multi-Joint Dynamics with Contact) that was developed at the University of Washington.
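MuJoCo models are described in an XML format called MJCF, which defines bodies, joints, and actuators that the physics engine then simulates. As a rough flavor of what such a model looks like, here is a minimal single-finger sketch; it is purely illustrative and bears no relation to the researchers’ actual hand model:

```xml
<mujoco model="finger-sketch">
  <worldbody>
    <body name="finger" pos="0 0 0.1">
      <!-- One hinge joint standing in for a knuckle; angles in degrees -->
      <joint name="knuckle" type="hinge" axis="0 1 0" limited="true" range="-45 45"/>
      <!-- A capsule-shaped segment for the finger itself -->
      <geom type="capsule" size="0.01" fromto="0 0 0 0 0 0.05"/>
    </body>
  </worldbody>
  <actuator>
    <!-- Position servo driving the joint, as a motor would -->
    <position joint="knuckle" kp="10"/>
  </actuator>
</mujoco>
```

The full Shadow Hand model has over 20 such joints per hand, which is what makes learning to control it so difficult.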

The work — still in progress — is notable because it tackles tasks that require two hands working together, such as throwing and catching, which adds extra difficulty to the learning process. The researchers believe the algorithms represent one of the most impressive examples to date of autonomously learning to complete challenging, dexterous manipulation tasks.

The algorithms that power the hands

The breakthrough involves two algorithms. First, a planning algorithm produces examples of how the task should be performed. Then a reinforcement learning algorithm, which learns through trial and error, practices repeatedly to achieve this action flawlessly. It does this using a reward function to assess how well it’s doing.
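The two-stage structure can be sketched with a toy one-dimensional example. Everything here is illustrative — the function names, the one-parameter “policy,” and the hill-climbing loop standing in for reinforcement learning are assumptions for the sketch, not the paper’s method:

```python
import random

def plan_trajectory(start, goal, steps=10):
    # Stage 1 (planning): produce an example of how the task "should" unfold.
    # Here it is just a straight-line path; the paper's trajectory
    # optimisation is far more sophisticated.
    return [start + (goal - start) * t / steps for t in range(1, steps + 1)]

def run_policy(gain, start, goal, steps=10):
    # Roll out a one-parameter policy and score it with a reward function.
    pos = start
    for _ in range(steps):
        pos += gain * (goal - pos)        # move a fraction of the remaining gap
    return -abs(goal - pos)               # reward: negative final error

def refine_policy(start, goal, episodes=200, seed=0):
    # Stage 2 (trial and error): start from the planner's demonstration and
    # keep random tweaks to the policy only when they raise the reward.
    demo = plan_trajectory(start, goal)
    gain = (demo[0] - start) / (goal - start)   # gain implied by the demo
    best = run_policy(gain, start, goal)
    rng = random.Random(seed)
    for _ in range(episodes):
        candidate = gain + rng.gauss(0.0, 0.05)
        reward = run_policy(candidate, start, goal)
        if reward > best:
            gain, best = candidate, reward
    return gain, best
```

The key point the sketch preserves is the division of labor: the planner shows roughly what success looks like, and the learning loop uses the reward signal to polish that behavior through repetition.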

“Ideally, you want to define a reward which is simple to specify, and doesn’t require a huge amount of engineering [and] tweaking, but which is also able to provide regular feedback to guide the learning,” Henry Charlesworth, another researcher on the project, told Digital Trends. “In the case of the pen-spinning task, we define a simple reward based on the pen’s angular velocity, as well as a slight negative reward based on how far the pen deviates from lying in the horizontal plane. In this case, ‘better’ means the pen is rotating as fast as possible whilst remaining ‘horizontal’ relative to the hand.”
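A reward of the shape Charlesworth describes takes only a few lines. The function name, the penalty weight, and the coordinate convention below are assumptions for illustration, not the paper’s implementation:

```python
def pen_spin_reward(angular_velocity, pen_axis, tilt_weight=0.1):
    """Reward fast spinning, lightly penalise tilting out of the horizontal.

    angular_velocity: the pen's spin rate (rad/s)
    pen_axis: (x, y, z) unit vector along the pen's long axis
    """
    # A horizontal pen has no vertical component along its long axis,
    # so |z| measures how far it has tilted out of the horizontal plane.
    tilt = abs(pen_axis[2])
    return angular_velocity - tilt_weight * tilt
```

With this shape, every timestep gives the learner feedback — spin faster, stay flatter — rather than a single pass/fail signal at the end of an attempt.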

Functional robotic hands aren’t just a cool demo. They could have plenty of applications in the real world. For example, more capable robotic hands could be useful in computer assembly where assembling microchips requires a level of precision that currently only human hands can achieve. They could also be used in robotic surgery, an application the Warwick researchers are currently investigating.

There’s a catch, though: So far, the algorithms — which show almost human levels of motion — have only been demonstrated in simulation. Translating them to physical hardware is the next step of the project.

“It does definitely add an extra layer of complexity, because although the simulator is reasonably accurate, it can never be perfect,” Charlesworth said. “This means that a policy you train in the simulated environment cannot be directly transferred to a physical hand. However, there has been a lot of successful work recently that looks at how you can make a policy trained in simulation more robust, such that it can operate on a physical robot.”
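Charlesworth doesn’t name a specific method, but one widely used approach to this problem is domain randomization: training against many randomly varied versions of the simulator’s physics, so the policy cannot overfit any single imperfect model. A toy sketch of the idea, with all names and numbers purely illustrative:

```python
import random

def simulate(gain, friction, start=0.0, goal=1.0, steps=10):
    # Toy "simulator": an unknown friction factor scales every action,
    # standing in for physics the simulator cannot model exactly.
    pos = start
    for _ in range(steps):
        pos += friction * gain * (goal - pos)
    return -abs(goal - pos)               # reward: negative final error

def train(frictions, episodes=300, seed=1):
    # Hill-climb one policy parameter against the AVERAGE reward over a
    # whole range of dynamics, so no single simulator can be overfit.
    score = lambda g: sum(simulate(g, f) for f in frictions) / len(frictions)
    rng = random.Random(seed)
    gain, best = 0.5, score(0.5)
    for _ in range(episodes):
        candidate = gain + rng.gauss(0.0, 0.05)
        reward = score(candidate)
        if reward > best:
            gain, best = candidate, reward
    return gain
```

A policy trained this way has to succeed whether friction turns out high or low — which is roughly the property you want when the “wrong” value is the one the real world hands you.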
