Japanese researchers use deep learning A.I. to get driftwood robots moving

Did you ever make sculptures out of found objects like driftwood? Researchers at the University of Tokyo have taken the same idea and applied it to robots. They've worked out a way to take everyday natural objects like pieces of wood and let deep reinforcement learning algorithms figure out how to make them move. Using just a few basic servos, they've opened up a whole new way of building robots — and it's pretty darn awesome.

“[In our work, we wanted to] consider the use of found objects in robotics,” the researchers write in a paper describing their work. “Here, these are branches of various shapes. Such objects have been used in art or architecture, but [are] not normally considered as robotic materials. [However,] when the robot is trained towards the goal of efficient locomotion, these parts adopt new meaning: hopping legs, dragging arms, spinning hips, or yet unnamed creative mechanisms of propulsion. Importantly, these learned strategies, and thus the meanings we might assign to such found object parts, are a product of optimization and not known prior to learning.”

Image credit: Azumi Maekawa/University of Tokyo

Deep reinforcement learning is useful for applications where the A.I. needs to work out strategies for itself through trial and error. Famously, this approach to artificial intelligence was used to develop DeepMind's A.I. that learned to play classic Atari games using just the games' on-screen data and knowledge of the controls. In this latest driftwood example, the robot uses reinforcement learning to test out different types of locomotion and find the best way to bring its wooden limbs to life. The resulting movements don't necessarily replicate those of real-life animals (to be fair, there aren't a whole lot of stick-like living creatures to model movement on!), but they are nonetheless efficient.
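
To make the trial-and-error idea concrete, here is a minimal sketch of searching for a gait by tuning a few servo parameters. It uses simple random search rather than the deep reinforcement learning the researchers used, and simulate_forward_distance() is a made-up stand-in for their physics simulation; every name and number in it is an assumption for illustration only.

```python
# Minimal sketch (not the authors' code) of trial-and-error gait search.
# Each candidate gait is a set of sinusoidal commands for a few servos;
# simulate_forward_distance() is a hypothetical stand-in for the simulator.
import numpy as np

rng = np.random.default_rng(0)
N_SERVOS = 3          # "just a few basic servos," as in the article
PARAMS_PER_SERVO = 3  # amplitude, frequency, phase for each servo

def servo_commands(params, t):
    """Turn gait parameters into servo angles at time t (one sine per servo)."""
    amp, freq, phase = params.reshape(N_SERVOS, PARAMS_PER_SERVO).T
    return amp * np.sin(2 * np.pi * freq * t + phase)

def simulate_forward_distance(params):
    """Stand-in reward: the real setup would run the physics simulation of
    the scanned sticks and return how far the robot travelled."""
    t = np.linspace(0.0, 2.0, 200)
    angles = np.array([servo_commands(params, ti) for ti in t])
    # Toy surrogate score, not a real physics model.
    return float(np.abs(np.diff(angles, axis=0)).sum() - 0.1 * np.abs(angles).sum())

# Trial and error: try a perturbed gait, keep it if it scores better.
best = rng.normal(size=N_SERVOS * PARAMS_PER_SERVO)
best_score = simulate_forward_distance(best)
for _ in range(500):
    candidate = best + 0.1 * rng.normal(size=best.shape)
    score = simulate_forward_distance(candidate)
    if score > best_score:
        best, best_score = candidate, score

print(f"best surrogate score: {best_score:.2f}")
```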

In a masterstroke, the researchers arranged for this training to be done in simulation. Among other things, this allows for a large number of failed movement attempts without any risk of destroying the physical robot in the process. To carry out these simulations accurately, though, the researchers first have to 3D-scan the sticks and enter their respective weights so that the gaits can be calculated correctly.
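
For the simulation step, loading a scanned stick into a physics engine might look something like the sketch below. It assumes a PyBullet-style simulator (the article doesn't name the one used), and the mesh filename and mass are hypothetical placeholders for the 3D scan and the measured weight.

```python
# Minimal sketch, assuming PyBullet; mesh file and mass are placeholders.
import pybullet as p

p.connect(p.DIRECT)            # headless physics simulation
p.setGravity(0, 0, -9.81)

# Flat ground for the robot to move across.
ground = p.createMultiBody(0, p.createCollisionShape(p.GEOM_PLANE))

# Load the scanned branch geometry and give it its measured mass (kg).
branch_shape = p.createCollisionShape(p.GEOM_MESH, fileName="branch_scan.obj")  # hypothetical scan
branch = p.createMultiBody(baseMass=0.25,  # measured weight would go here
                           baseCollisionShapeIndex=branch_shape,
                           basePosition=[0, 0, 0.2])

# Roll the simulation forward and see how far the body travels; a learning
# loop would wrap this, scoring each candidate gait by distance covered.
start, _ = p.getBasePositionAndOrientation(branch)
for _ in range(240):           # roughly one simulated second at the default timestep
    p.stepSimulation()
end, _ = p.getBasePositionAndOrientation(branch)
print("distance travelled:", end[0] - start[0])
```

In the real setup, several scanned sticks would also be joined by the servo motors, and it is those joints that the learned gait actually drives.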

While it’s likely that roboticists will continue to build many robots from the ground up, this is still a great reminder that, with the right software, literally anything can be a robot — even a pile of sticks.
