Pinterest Labs aims to tackle the most challenging problems in AI

What happens when you add machine learning to a database of 100 billion image-rich objects and ideas? Pinterest is already scratching the surface of a potential answer to that question with its AI-powered tools, including visual search and Pinterest Lens, but now it wants to dig deeper.

The company wants to join the ranks of industry giants Facebook and Google in accelerating the growth of artificial intelligence through open research and collaboration. To help it achieve that goal, it is launching a new group, dubbed “Pinterest Labs,” composed of machine learning experts whose investigations could help transform the way users discover ideas.

In the words of Pinterest chief scientist and Stanford associate professor Jure Leskovec: “As much as we’ve done, we still have far to go — most of Pinterest hasn’t been built yet.”

By working with the research community and universities, including the Berkeley Artificial Intelligence Research Lab, the University of California San Diego, and Stanford University, Pinterest Labs hopes to build the AI systems behind some of the service’s integral features. These include the “taste graph,” the technique the company uses to map the connections between pins, people, and boards in order to surface relevant ideas for users. The company also hopes machine learning can help it deliver personalized recommendations faster.
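At its simplest, a taste graph can be thought of as a graph linking pins to the boards and people that save them, with related pins surfaced by how often they are saved together. The Python sketch below is a toy illustration of that co-occurrence idea, using made-up pin and board names; it is not Pinterest’s actual system.

```python
from collections import defaultdict
from itertools import combinations

# Illustrative only: a toy "taste graph" built from (board, pin) saves.
# Pins frequently saved to the same boards are treated as related.
saves = [
    ("board_kitchen", "pin_farmhouse_sink"),
    ("board_kitchen", "pin_butcher_block"),
    ("board_kitchen", "pin_open_shelving"),
    ("board_remodel", "pin_farmhouse_sink"),
    ("board_remodel", "pin_open_shelving"),
]

# Group pins by the board they were saved to.
pins_by_board = defaultdict(set)
for board, pin in saves:
    pins_by_board[board].add(pin)

# Count how often each pair of pins co-occurs on a board.
co_occurrence = defaultdict(int)
for pins in pins_by_board.values():
    for a, b in combinations(sorted(pins), 2):
        co_occurrence[(a, b)] += 1

def related_pins(pin, top_k=3):
    """Return the pins most often saved alongside `pin`, strongest first."""
    scores = defaultdict(int)
    for (a, b), count in co_occurrence.items():
        if a == pin:
            scores[b] += count
        elif b == pin:
            scores[a] += count
    return sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]

print(related_pins("pin_farmhouse_sink"))
# e.g. [('pin_open_shelving', 2), ('pin_butcher_block', 1)]
```

A production system would combine many more learned signals than raw co-occurrence counts, but even this simple structure hints at how the pattern of saves alone can encode taste.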

Those interested in its work can keep up with the research group through its dedicated website and its public tech talks, the first of which took place on Tuesday at the company’s headquarters in San Francisco. Pinterest Labs will also share its findings with the academic community by publishing research papers and releasing its data to researchers.

Leskovec claims that Pinterest’s systems now rank more than 300 billion objects per day. In the last year, the platform has increased the number of recommendations it serves by 200 percent, while making them 30 percent more engaging.

Pinterest took a big leap into machine learning with the launch of its Pinterest Lens tool at the start of this month. The machine learning system that powers Lens can recognize objects in photos and identify their features, such as color, letting users snap images with their smartphone camera to discover and purchase related items on Pinterest.
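The object recognition in a system like Lens is handled by deep neural networks, while simpler signals such as dominant color can be read directly from the pixels. The sketch below covers only that color step, using Pillow on a hypothetical image file; it is a rough illustration of the general idea, not a description of how Lens itself works.

```python
from PIL import Image  # pip install Pillow

def dominant_colors(path, top_k=3):
    """Return the most common colors in an image as (count, (r, g, b)) tuples.

    Downscaling first keeps the color count small and fast; this is a crude
    stand-in for the kind of color feature a visual search system might index.
    """
    img = Image.open(path).convert("RGB")
    img = img.resize((64, 64))  # a coarse sample is enough for dominant hues
    colors = img.getcolors(maxcolors=64 * 64)  # list of (count, (r, g, b))
    return sorted(colors, reverse=True)[:top_k]

# Hypothetical file name, for illustration only.
print(dominant_colors("lamp_photo.jpg"))
```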

Saqib Shah
Former Digital Trends Contributor