Watching artificial intelligence teach itself how to walk is weirdly captivating

DeepLoco: Highlights
Do you remember the adorable scene in Bambi where Thumper the rabbit teaches Disney’s lovable deer how to walk? Well, computer scientists from the University of British Columbia and National University of Singapore just did that with a bipedal computer model (read: essentially a pair of animated legs) — only instead of a cute cartoon rabbit, the teacher is a deep reinforcement learning artificial intelligence algorithm.

Called DeepLoco, the work was shown off this week at SIGGRAPH 2017, arguably the world’s leading computer graphics conference. While CGI capable of mimicking lifelike walking motions has existed for years, what makes this work so nifty is that it uses reinforcement learning to learn those motions, rather than having them scripted by hand.

Reinforcement learning, for those unfamiliar with it, is a branch of machine learning in which software agents learn, through trial and error, to take actions that maximize a reward. Google’s DeepMind, for example, has used reinforcement learning to teach an AI to play classic video games by working out how to rack up high scores.
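To make that trial-and-error loop concrete, here is a minimal, purely illustrative sketch: a tabular Q-learning agent learning to reach the right end of a tiny one-dimensional track. None of this comes from the DeepLoco code, which uses deep neural networks and a full physics simulation; the environment, reward values, and hyperparameters below are invented for illustration.

```python
import random

# Toy 1D "track": the agent starts at position 0 and is rewarded for
# reaching position 5 (a stand-in for "get from Point A to Point B").
# Everything here -- states, rewards, learning rate -- is made up for
# illustration; it is not DeepLoco's actual setup.
N_STATES = 6          # positions 0..5, with 5 as the goal
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

# Q-table: estimated future reward for each (state, action) pair.
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally, otherwise exploit the best known action.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])

        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else -0.1  # small cost per step

        # Q-learning update: nudge the estimate toward reward + discounted future value.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = next_state

# After training, the greedy policy simply walks right toward the goal.
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)])
```

DeepLoco swaps the tiny lookup table for deep neural networks and the toy track for a simulated bipedal character, but the underlying idea is the same: try actions, see what reward comes back, and gradually prefer the actions that pay off.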

In the case of DeepLoco, the reward is getting from Point A to Point B in the most efficient manner possible, all while being challenged by everything from navigating narrow cliffs to surviving bombardments of objects. As it does this, it learns from its environment in order to discover how to balance, walk, and even dribble a soccer ball. It’s like watching your kid grow up — except that, you know, in this case, your kid is a pair of disembodied AI legs powered by Skynet!
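In the paper itself, that reward is spelled out mathematically rather than in prose. As a loose, hypothetical illustration of the shape such an objective can take (the actual DeepLoco reward, which involves footstep goals and imitation terms, is more involved), one might combine progress toward the target with a small penalty for wasted effort:

```python
import numpy as np

def locomotion_reward(prev_pos, pos, target, joint_torques):
    """Hypothetical reward: progress toward the target minus an effort penalty.

    This is an invented stand-in for the kind of objective described in the
    article, not the reward actually used in the DeepLoco paper.
    """
    progress = np.linalg.norm(prev_pos - target) - np.linalg.norm(pos - target)
    effort = 1e-3 * np.sum(np.square(joint_torques))
    return progress - effort
```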

Nonetheless, it is another intriguing example of the power of reinforcement learning. While the technology could be applied in any number of ways (such as helping animators populate giant computer-generated crowd scenes in movies), its most game-changing use would almost certainly be in robotics. Applied to some of the cutting-edge walking robots we have seen from companies like Boston Dynamics, DeepLoco could help develop robots able to move more intuitively through a wide range of environments.

A paper describing the work, titled “DeepLoco: Dynamic Locomotion Skills Using Hierarchical Deep Reinforcement Learning,” was published in the journal ACM Transactions on Graphics.

Luke Dormehl
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…