Adversarial robots use game theory to improve at grabbing objects

Whether it was your favorite toy or the last portion of mashed potatoes, anyone who grew up with a sibling knows that you learn to forcefully stake your claim to what’s rightfully yours.

It turns out that a similar idea can be applied to robots.

In a new piece of research, presented at the 2017 International Conference on Robotics and Automation (ICRA), engineers from Google and Carnegie Mellon University demonstrated that robots learn to grasp objects more robustly if another robot tries to snatch those objects away while they’re doing so.

When one robot was given the task of picking up an object, the researchers made its evil twin (not that they used those words exactly) attempt to grab the object away from it. If the object wasn’t held securely, the rival robot would succeed in its snatch-and-grab effort. Over time, the first robot learned to hold onto its object more securely, and with a vastly accelerated learning time compared to working this out on its own.

Video: Robot Adversaries for Grasp Learning

“Robustness is a challenging problem for robotics,” Lerrel Pinto, a PhD student at Carnegie Mellon’s Robotics Institute, told Digital Trends. “You ideally want a robot to be able to transfer what it has learnt to environments that it hasn’t seen before, or even be stable to risks in the environment. Our adversarial formulation allows the robot to learn to adapt to adversaries, and this could allow the robot to work in new environments.”

The work uses deep learning technology, as well as insights from game theory: the mathematical study of conflict and cooperation, in which one party’s gain can mean the other party’s loss. In this case, a successful snatch by the rival robot is recorded as a failure for the robot that lost the object, which triggers a learning experience for the loser. Over time, the robots’ tussles make each of them smarter.
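To make that zero-sum scoring concrete, here is a minimal toy sketch in Python. Everything in it is an invented illustration (the Learner class, the scalar “skill,” the probabilities), not the researchers’ actual code; the real system trains deep networks on physical grasp outcomes rather than anything this simple.

```python
import random

random.seed(0)

# A deliberately tiny stand-in for a learned policy: a single scalar
# "skill" nudged up on wins and down on losses. This only illustrates
# the zero-sum reward bookkeeping described above.
class Learner:
    def __init__(self, skill=0.1):
        self.skill = skill

    def update(self, won):
        step = 0.05 if won else -0.01
        self.skill = min(1.0, max(0.0, self.skill + step))

grasper = Learner()    # the robot trying to hold the object
adversary = Learner()  # the rival robot trying to snatch it away

for episode in range(2000):
    # Chance the grasp survives the snatch attempt: better grasping helps,
    # a better adversary hurts (the 0.3 floor lets learning bootstrap).
    p_hold = 0.3 + 0.7 * grasper.skill * (1.0 - 0.5 * adversary.skill)
    held = random.random() < p_hold

    # Zero-sum scoring: the adversary's win is exactly the grasper's loss.
    grasper.update(won=held)
    adversary.update(won=not held)

print(f"grasp skill after adversarial training: {grasper.skill:.2f}")
```

The key detail is the paired update at the end of each episode: every win for one robot is scored as a loss for the other, which is the game-theoretic structure that turns the adversary’s attacks into a training signal.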

That sounds like progress — just as long as the robots don’t eventually form a truce and target us with their adversarial AI, we guess!
