
MIT’s new robot can play everyone’s favorite block-stacking game, Jenga


Not content with getting freakishly good at cerebral games like chess and Go, it seems that artificial intelligence is now coming for the kind of fun games we played as kids (and childish adults). With that in mind, researchers from the Massachusetts Institute of Technology (MIT) have developed a robot that uses machine learning and computer vision to play everyone’s favorite tower-toppling game, Jenga.

If it’s been a while since you played Jenga, the game revolves around a wooden tower constructed from 54 blocks. Players take it in turns to remove one block from the tower and place it on top of the stack. Over time, the tower gets taller and, crucially, more unstable. The result is a game of impressive physical skill for humans — and, now, for robots as well.

MIT’s Jenga-playing bot is equipped with a soft-pronged gripper, a force-sensing wrist cuff, and an external camera, which it uses to perceive the tower of blocks in front of it. When it pushes against a block, the robot takes visual and tactile feedback from the camera and cuff and weighs it against its previous experience playing the game. Over time, it learns when to keep pushing and when to try a new block to keep the Jenga tower from falling.
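To make that feedback loop a little more concrete, here is a minimal Python sketch of the kind of decision the article describes: probe a block, read the force and motion measurements, compare them with pushes that previously ended safely, and decide whether to keep going or pick a different block. This is an illustrative approximation, not the MIT team’s actual model, and every class, function, and threshold below is hypothetical.

```python
# Conceptual sketch (not MIT's actual code) of the push-or-switch decision
# described above: nudge a block, read visual and tactile feedback, compare
# the measurements with past experience, and choose an action.

from dataclasses import dataclass
import math


@dataclass
class PushOutcome:
    force: float         # wrist-cuff force reading (hypothetical units)
    displacement: float  # block movement seen by the camera (hypothetical units)
    tower_moved: bool    # did the rest of the tower visibly shift?


def looks_safe(current: PushOutcome,
               past_successes: list[PushOutcome],
               threshold: float = 1.0) -> bool:
    """Crude similarity test: does this push resemble pushes that
    previously ended without disturbing the tower?"""
    if current.tower_moved:
        return False
    if not past_successes:
        return True  # no experience yet; proceed cautiously
    # Distance to the nearest remembered "good" push.
    nearest = min(
        math.hypot(current.force - p.force,
                   current.displacement - p.displacement)
        for p in past_successes
    )
    return nearest < threshold


def choose_action(current: PushOutcome,
                  past_successes: list[PushOutcome]) -> str:
    """Keep pushing if the block behaves like past safe extractions;
    otherwise back off and try another block."""
    return "keep_pushing" if looks_safe(current, past_successes) else "try_new_block"


if __name__ == "__main__":
    history = [PushOutcome(0.4, 2.0, False), PushOutcome(0.5, 1.8, False)]
    probe = PushOutcome(0.45, 1.9, False)
    print(choose_action(probe, history))  # -> keep_pushing
```

The point of the sketch is simply that the robot’s choice hinges on comparing live sensor readings against remembered outcomes, which is why, as the researchers note below, it has to gather that experience on a real tower rather than in simulation.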

“Playing the game of Jenga … requires mastery of physical skills such as probing, pushing, pulling, placing, and aligning pieces,” Alberto Rodriguez, assistant professor in the Department of Mechanical Engineering at MIT, said in a statement. “It requires interactive perception and manipulation, where you have to go and touch the tower to learn how and when to move blocks. This is very difficult to simulate, so the robot has to learn in the real world, by interacting with the real Jenga tower. The key challenge is to learn from a relatively small number of experiments by exploiting common sense about objects and physics.”

At face value, a robot whose only mission is to play Jenga doesn’t sound like it has much real-world use. But the underlying concept, a robot that can learn about the physical world from both visual cues and tactile interaction, has immense potential. Who knew a Jenga-playing robot could be so versatile?
