
To build a lifelike robotic hand, we first have to build a better robotic brain

Our hands are a bridge between the intentions laid out by the brain and the physical world, carrying out our wishes by letting us turn thoughts into actions. If robots are going to truly live up to their potential for physical interaction, it is therefore crucial that they have some similar instrument at their disposal.

We know that roboticists are already building some astonishingly intricate robot hands. But those hands also need the smarts to control them: the ability to grip objects properly according to both their shape and their hardness or softness. You don't want your future robot co-worker to crush your hand into gory mush when it shakes hands with you on its first day in the office.

Fortunately, this is exactly what researchers from Germany have been working on: a new, more brain-inspired neural network that allows a robotic hand (in this case, an existing model called the Schunk SVH five-finger hand) to learn how to pick up objects of different shapes and hardness levels by selecting the correct grasping motion. In a proof-of-concept demonstration, the robot hand was able to pick up a diverse range of objects including a plastic bottle, a tennis ball, a sponge, a rubber duck, a pen, and an assortment of balloons.

[Image: Robot arm gripper. Credit: FZI Forschungszentrum Informatik Karlsruhe]

“Our approach has two main components: The modeling of motion of the hand, and the compliant control,” Juan Camilo Vasquez Tieck, a research scientist at FZI Forschungszentrum Informatik in Karlsruhe, Germany, told Digital Trends. “The hand is modeled in a hierarchy of different layers, and the motion is represented with motion primitives. All the joints of one finger are coordinated by a finger-primitive. For one particular grasping motion, all the fingers are coordinated by a hand-primitive.”

In other words, he explained, it can close its hand in different ways.
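To make the primitive hierarchy Tieck describes more concrete, here is a minimal sketch of the joint-to-finger-to-hand layering. All class names, joint names, and angle values are illustrative assumptions; the actual system in the paper coordinates these layers with spiking networks, not plain Python objects.

```python
class FingerPrimitive:
    """Coordinates all joints of one finger toward target angles (illustrative)."""

    def __init__(self, name, joint_targets):
        self.name = name
        self.joint_targets = joint_targets  # target angle (degrees) per joint

    def activate(self, scale=1.0):
        # Scaling lets one primitive produce anything from a light touch
        # to a full closure of the finger.
        return {joint: angle * scale for joint, angle in self.joint_targets.items()}


class HandPrimitive:
    """Coordinates several finger primitives into one grasping motion."""

    def __init__(self, name, fingers):
        self.name = name
        self.fingers = fingers

    def activate(self, scale=1.0):
        # One hand-primitive call fans out to every finger-primitive,
        # which in turn fans out to every joint of that finger.
        return {f.name: f.activate(scale) for f in self.fingers}


# Hypothetical example: a cylindrical grasp built from two finger primitives.
index = FingerPrimitive("index", {"proximal": 60.0, "distal": 45.0})
thumb = FingerPrimitive("thumb", {"proximal": 40.0, "distal": 30.0})
grasp = HandPrimitive("cylindrical_grasp", [index, thumb])
commands = grasp.activate(scale=0.5)  # a half-closed hand
```

The point of the hierarchy is that swapping the hand-primitive changes the whole coordination pattern at once, which is how the same hand can "close in different ways" for different objects.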

The system represents a different way of developing robotic systems for carrying out these kinds of actions. The neural network involved allows the hand to grasp more intelligently, making real-time adaptations where necessary.

“Spiking neural networks (SNN) are a special kind of artificial neural networks that model closer the way real neurons work,” Tieck continued. “There are many spiking neuron models based on neuroscience research. For this work, we used leaky integrate and fire (LIF) neurons. The communication between neurons is event-based, using spikes. Spikes are discrete impulses, and not a continuous signal. This … reduces the amount of information being sent between neurons and provides great power efficiency.”
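The leaky integrate-and-fire model Tieck mentions can be sketched in a few lines: the membrane potential integrates incoming current, leaks back toward rest, and emits a discrete spike whenever it crosses a threshold. The parameter values below are illustrative defaults, not the ones used in the paper.

```python
def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_reset=0.0, v_threshold=1.0):
    """Simulate a single LIF neuron over a sequence of input currents.

    Returns the membrane-potential trace and the time steps at which
    the neuron spiked. All parameters are illustrative assumptions.
    """
    v = v_rest
    trace, spikes = [], []
    for t, i_in in enumerate(input_current):
        # Leak toward the resting potential while integrating the input.
        v += (-(v - v_rest) + i_in) * (dt / tau)
        if v >= v_threshold:
            spikes.append(t)   # event-based output: a discrete spike
            v = v_reset        # membrane potential resets after spiking
        trace.append(v)
    return trace, spikes


# A sustained input drives the neuron to spike repeatedly;
# with no input, the neuron stays silent.
_, spikes_driven = simulate_lif([2.0] * 100)
_, spikes_silent = simulate_lif([0.0] * 50)
```

Because output is only produced at spike times, downstream neurons receive sparse, discrete events rather than a continuous signal, which is the source of the efficiency Tieck describes.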

A paper describing the work was recently published in the journal IEEE Robotics and Automation Letters.


Luke Dormehl
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…