
This groundbreaking new style of A.I. learns things in a totally different way

With very rare exceptions, every major advance in artificial intelligence this century has been the result of machine learning. As its name implies (and counter to the symbolic A.I. that characterized much of the first half of the field’s history), machine learning involves smart systems that don’t just follow rules but actually, well, learn.

But there’s a problem. Unlike even a small human child, a machine learning system needs to be shown large numbers of training examples before it can reliably recognize a new category. A machine can’t, say, see a single object like a “doofer” (you don’t know what it is, but we bet you would remember it if you saw one) and thereafter recognize every subsequent doofer it comes across.

If A.I. is going to live up to its potential, it needs to be able to learn this way, too. While the problem has yet to be fully solved, a new research paper from the University of Waterloo in Ontario describes a potential breakthrough process called LO-shot (or less-than-one-shot) learning. This could enable machines to learn far more rapidly, in the manner of humans. That would be useful for a wide range of reasons, but particularly in scenarios where large amounts of training data do not exist.

The promise of less-than-one shot learning

“Our LO-shot learning paper theoretically explores the smallest possible number of samples that are needed to train machine learning models,” Ilia Sucholutsky, a Ph.D. student working on the project, told Digital Trends. “We found that models can actually learn to recognize more classes than the number of training examples they are given. We initially noticed this result empirically when working on our previous paper on soft-label dataset distillation, a method for generating tiny synthetic datasets that train models to the same performance as if they were trained on the original dataset. We found that we could train neural nets to recognize all 10 digits — zero to nine — after being trained on just five synthetic examples, less than one per digit. … We were really surprised by this, and it’s what led to us working on this LO-shot learning paper to try and theoretically understand what was going on.”
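The core idea can be sketched in a few lines of code: give each training example a “soft” label, a probability distribution over classes rather than a single class, and a simple distance-weighted classifier can carve out more class regions than it has examples. The toy below is an illustration of that principle only, not the paper’s exact method; the positions, label values, and inverse-distance weighting are all our own assumptions. It separates three classes using just two training prototypes:

```python
# Two training prototypes on a 1-D feature axis, each carrying a soft
# label: a probability distribution over classes 0, 1, and 2.
# (Positions and label values here are made up for illustration.)
prototypes = [
    (0.0, [0.6, 0.4, 0.0]),  # mostly class 0, partly class 1
    (1.0, [0.0, 0.4, 0.6]),  # mostly class 2, partly class 1
]

def classify(x, eps=1e-9):
    """Predict a class by summing soft labels weighted by inverse distance."""
    scores = [0.0, 0.0, 0.0]
    for pos, soft_label in prototypes:
        weight = 1.0 / (abs(x - pos) + eps)  # closer prototypes count more
        for c, p in enumerate(soft_label):
            scores[c] += weight * p
    return scores.index(max(scores))

# Two examples yield three distinct decision regions:
print(classify(0.05))  # near the first prototype  -> 0
print(classify(0.50))  # between the prototypes    -> 1
print(classify(0.95))  # near the second prototype -> 2
```

Near either prototype its dominant class wins outright, but in the middle the two small class-1 contributions add up and outvote both — which is how a model can, in principle, learn N classes from fewer than N examples.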

Sucholutsky stressed that this work is still in its early stages. The new paper shows that LO-shot learning is possible; the researchers must now develop the algorithms required to actually perform it. In the meantime, he said the team has received interest from researchers in areas as diverse as volcanology, medical imaging, and cybersecurity — all of whom could benefit from this kind of A.I. learning.

“I’m hoping that we’ll be able to start rolling out these new tools really soon, but I encourage other machine learning researchers to also start exploring this direction to speed that process up,” Sucholutsky said.


Luke Dormehl
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…