
Robot learns how to grab objects by analyzing them in simulated reality

DexNet 2.0: 99% Precision Grasping
Our hands are pretty great at picking up all manner of objects, and our brains are fine-tuned to work out exactly where and how to grip an object most securely. That's not easy for a robot, however. Faced with a world full of strangely shaped objects to pick up and manipulate, there's no easy way to program a robot to know the precise grip it should employ for every single object it might encounter.

That’s where researchers from the University of California, Berkeley come into play. They’ve developed a system called DexNet 2.0 that works out how to perform this task not by endlessly practicing in real life, but by analyzing the objects in virtual reality — courtesy of a deep learning neural network.

“We construct a probabilistic model of the physics of grasping, rather than assuming the robot knows the true state of the world,” Jeff Mahler, a postdoctoral researcher who worked on the project, told Digital Trends. “Specifically, we model the robustness, or probability of achieving a successful grasp, given an observation of the environment. We use a large dataset of 1,500 virtual 3D models to generate 6.7 million synthetic point clouds and grasps across many possible objects. Then we can learn to predict the probability of success of grasps given a point cloud using deep learning. Deep learning allows us to learn this mapping across such a large and complex dataset.”
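Mahler's description suggests a simple planning loop: sample candidate grasps over a point cloud, score each one with a learned robustness model, and execute the highest-scoring grasp. The sketch below illustrates that loop in Python. Everything here is an illustrative stand-in, not the Berkeley team's code: the point cloud is random, `render_depth_patch` fakes a depth-image crop with noise, and `grasp_robustness` is a hand-written placeholder where Dex-Net would instead query a CNN trained on its 6.7 million synthetic examples.

```python
import numpy as np

rng = np.random.default_rng(0)

def render_depth_patch(point_cloud, grasp_center, size=32):
    # Stand-in renderer: a real system would crop a depth-image patch
    # centered and aligned on the candidate grasp axis. Here we just
    # return noise so the sketch is self-contained.
    return rng.normal(0.5, 0.1, (size, size))

def grasp_robustness(depth_patch, grasp_depth):
    # Placeholder for the learned model: Dex-Net trains a deep network
    # to map (depth patch, grasp depth) -> probability of grasp success.
    # This toy score just prefers flat patches and mid-range depths.
    score = np.exp(-np.var(depth_patch)) * np.exp(-abs(grasp_depth - 0.5))
    return float(np.clip(score, 0.0, 1.0))

def plan_grasp(point_cloud, n_candidates=100):
    # Sample candidate grasps (x, y, depth), score each, return the best.
    candidates = rng.uniform(0.0, 1.0, (n_candidates, 3))
    scored = [
        (grasp_robustness(render_depth_patch(point_cloud, c[:2]), c[2]), c)
        for c in candidates
    ]
    best_score, best_grasp = max(scored, key=lambda t: t[0])
    return best_grasp, best_score

cloud = rng.uniform(0.0, 1.0, (2048, 3))  # toy point cloud
grasp, p_success = plan_grasp(cloud)
print("best grasp (x, y, depth):", grasp, "estimated success:", p_success)
```

The key design point carried over from the quote is that the planner never assumes it knows the true object state; it only ranks grasps by their predicted probability of success given a noisy observation.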


The most obvious application for DexNet would be to improve robots used in warehousing or manufacturing, enabling them to cope with new components or other objects and to manipulate them, whether packing them into boxes for shipping or performing assembly tasks. However, as Mahler points out, the technology could also improve home robots, such as ones that tidy up items or assistive-care robots that bring objects to elderly people who can't otherwise reach them.

There’s still more work to be done, though. “The big thrust of research in the next year is related to having the robot grasp for a particular use case,” Mahler said. “For example, orienting a bottle so it can be placed standing up, or flipping Lego bricks over to plug them into other bricks.”

Other items on the agenda include grasping objects in clutter and reorienting objects for assembly. The team also plans to release the code needed for users to generate their own training datasets and deploy the system on their own parallel-jaw robots. This is slated for later in 2017.

“We have some interest in commercialization, but are primarily interested in furthering research on the subject in the next 6-12 months,” Mahler concluded.


Luke Dormehl
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…