
Leaps, bounds, and beyond: Robot agility is progressing at a feverish pace

Cassie robot learns to hop, run and skip

When Charles Rosen, the A.I. pioneer who founded SRI International’s Artificial Intelligence Center, was asked to come up with a name for the world’s first general-purpose mobile robot, he thought for a moment and then said: “Well, it shakes like hell when it moves. Let’s just call it Shakey.”

Some variation of this idea has persisted for much of the history of modern robotics. Robots, we often assume, are clunky machines with as much grace as an atheist’s Sunday lunch. Even science fiction movies have repeatedly imagined robots as ungainly creations that walk with slow, halting steps.


That idea simply no longer lines up with reality.

Recently, a group of researchers from the Dynamic Robotics Laboratory at Oregon State took one of the university’s Cassie robots, a pair of walking robot legs that resembles the lower extremities of an ostrich, to a sports field to try out the lab’s latest “bipedal gait” algorithms. Once there, the robot hopped, walked, cantered, and galloped, switching seamlessly between each type of motion without having to slow down. It was an impressive demonstration, and one that speaks to the agility of current legged robots — especially when a bit of deep learning-based training is involved.


“Usually, when people apply deep reinforcement learning to robotics, they use reward functions that boil down to rewarding the neural network for closely mimicking a reference trajectory,” Jonah Siekmann, one of the researchers on the project, told Digital Trends. “Collecting this reference trajectory in the first place can be pretty difficult, and once you have a ‘running’ reference trajectory, it’s not very clear if you can also use that to learn a ‘skipping’ behavior, or even a ‘walking’ behavior.”

In the OSU work, the team created a reward paradigm that scrapped the idea of reference trajectories completely. Instead, it divides time into “phases,” penalizing the robot for having a specific foot on the ground during certain phases while permitting ground contact during others. The neural network then figures out “all the hard stuff” — such as the position the joints should be in, how much torque to apply at each joint, and how to remain stable and upright — creating a reward-based design paradigm that makes it easy for robots like Cassie to learn just about any bipedal gait found in nature.
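A minimal sketch of what such a phase-based reward might look like, for intuition only — the function names, cycle length, and phase windows here are illustrative assumptions, not the OSU team’s actual code:

```python
# Illustrative phase-based gait reward (hypothetical names and numbers).
# Time is divided into a repeating gait cycle; each foot is penalized for
# touching the ground during its "swing" window and rewarded for contact
# during its "stance" window -- no reference trajectory is needed.

CYCLE_LENGTH = 1.0  # assumed duration of one gait cycle, in seconds


def phase(t):
    """Normalized position within the gait cycle, in [0, 1)."""
    return (t % CYCLE_LENGTH) / CYCLE_LENGTH


def foot_reward(t, foot_on_ground, swing_window):
    """+1 if the foot's contact state matches the phase, -1 otherwise.

    swing_window: (start, end) fractions of the cycle during which
    this foot is supposed to be in the air.
    """
    start, end = swing_window
    in_swing = start <= phase(t) < end
    desired_contact = not in_swing
    return 1.0 if foot_on_ground == desired_contact else -1.0


def gait_reward(t, left_on_ground, right_on_ground):
    """Walking gait: the two feet alternate, so the swing windows
    are offset by half a cycle. Swapping the windows changes the
    gait (e.g., overlapping flight windows would encourage hopping)."""
    return (foot_reward(t, left_on_ground, (0.0, 0.5))
            + foot_reward(t, right_on_ground, (0.5, 1.0)))
```

For example, a quarter of the way through the cycle the left foot should be airborne and the right foot planted, so `gait_reward(0.25, False, True)` returns the maximum reward of `2.0`, while planting the left foot at that moment (`gait_reward(0.25, True, True)`) costs a penalty. Changing gaits then amounts to changing the phase windows rather than collecting a new reference trajectory.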

Predicting the future

It’s an impressive feat, to be sure. But it also raises a larger question: How on Earth did robots get so agile? While there is still no shortage of videos online showing robots collapsing when things go wrong, there is also no doubt that the overall path they are on is headed toward impressively smooth locomotion. Once, the idea of a robot cantering like a pony or performing a picture-perfect athletic routine would have been far-fetched even for a movie. In 2020, robots are getting there.

Predicting these advances isn’t easy, however. There is no simple Moore’s Law-type observation that makes it easy to map out the path robots are taking from clunky machines to smooth operators.

Moore’s Law refers to the observation made by Intel co-founder Gordon Moore in 1965 that the number of components that could be squeezed onto an integrated circuit would double every one to two years. While there’s an argument to be made that we may now be reaching the limits of Moore’s Law, a researcher in, say, 1991 could realistically work out, on the back of an envelope, roughly where computer capabilities might be in 2021. Things are more complex for robots.
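That back-of-envelope arithmetic is simple enough to sketch — assuming, for illustration, a fixed two-year doubling period (Moore’s original 1965 projection was closer to annual doubling):

```python
def moore_projection(start_year, end_year, doubling_period_years=2.0):
    """Projected multiplier in transistor count between two years,
    under a simple fixed-doubling-period model of Moore's Law."""
    doublings = (end_year - start_year) / doubling_period_years
    return 2 ** doublings


# A 1991 researcher assuming two-year doublings would project fifteen
# doublings by 2021 -- roughly a 32,768x increase in component counts.
print(moore_projection(1991, 2021))  # → 32768.0
```

No comparably simple curve exists for legged robots, which is exactly the forecasting problem described below.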


“Even though Moore’s Law forecasted the trend in compute power astonishingly well, forecasting a trend in legged robots is like gazing into a crystal ball,” Christian Gehring, chief technology officer at ANYbotics AG, a Swiss company making legged robots that are already being used for tasks like autonomously inspecting offshore energy platforms, told Digital Trends. “In essence, legged robots are highly integrated systems relying on many different technologies like energy storage, sensing, acting, computing, networking and intelligence.”

It’s advances in this confluence of different technologies working together that make today’s robots so powerful. It is also what makes them tough to predict as far as the road map of future development goes. To build the kinds of robots that roboticists would like, there need to be advances in the creation of small and lightweight batteries, sensing and perception capabilities, cellular communications, and more. All of these will need to work together with advances in fields like deep learning A.I. to create the kinds of machines that will forever banish images of the clunky science fiction bots we grew up watching on TV.

Smaller, cheaper, better

The good news is that it’s happening. While Moore’s Law-style gains in compute power drive advances on the software side, essential hardware components are getting smaller and cheaper, too. It’s not as neat as Gordon Moore’s formulation, but it is happening.

“Even with our ATRIAS science demonstrator [robot] from six or eight years ago, the power amplifiers to run our motors were these three-pound bricks; they were big,” Jonathan Hurst, co-founder of Agility Robotics, which built the aforementioned Cassie robot, told Digital Trends. “Since then, we’ve got these little, tiny amplifiers that have the same amount of current, the same amount of voltage, and give us very good control over the torque output of our motors. And they’re tiny — only an inch by two inches by a half-inch high or something like that. We’ve got 10 of those on Cassie. That adds up. You’ve got a three-pound brick that’s six inches by four inches by four inches versus maybe a couple ounces that’s an inch by two inches. It makes a big difference with things like the power electronics.”

UW ECE Research Colloquium, October 20, 2020: Jonathan Hurst, Oregon State University

Hurst said he believes legged robots are still in the early stages of their path to becoming ubiquitous technologies that can not only move in a naturalistic way like humans, but function seamlessly alongside them. Some of these challenges will go way beyond cute (but extremely impressive) demos like making robots canter like ponies. But building smarter machines that can master different kinds of movement, and be trusted to operate in the real world, is certainly an important step.

It’s a step (or steps) that walking robots are getting better and better at all the time.
