With very rare exceptions, every major advance in artificial intelligence this century has been the result of machine learning. As its name implies (and counter to the symbolic A.I. that characterized much of the first half of the field’s history), machine learning involves smart systems that don’t just follow rules but actually, well, learn.
But there’s a problem. Unlike even a small human child, a machine learning model needs to be shown large numbers of training examples of something before it can reliably recognize it. There’s no machine equivalent of seeing a single object like a “doofer” (you don’t know what it is, but we bet you’d remember one if you saw it) and, thereafter, being able to recognize every subsequent doofer you see.
If A.I. is going to live up to its potential, it will need to learn this way too. While the problem has yet to be solved, a new research paper from the University of Waterloo in Ontario describes a potential breakthrough called LO-shot (or less-than-one shot) learning. This could enable machines to learn far more rapidly, in the manner of humans. That would be useful for a wide range of reasons, but particularly in scenarios in which large amounts of training data do not exist.
The promise of less-than-one shot learning
“Our LO-shot learning paper theoretically explores the smallest possible number of samples that are needed to train machine learning models,” Ilia Sucholutsky, a Ph.D. student working on the project, told Digital Trends. “We found that models can actually learn to recognize more classes than the number of training examples they are given. We initially noticed this result empirically when working on our previous paper on soft-label dataset distillation, a method for generating tiny synthetic datasets that train models to the same performance as if they were trained on the original dataset. We found that we could train neural nets to recognize all 10 digits — zero to nine — after being trained on just five synthetic examples, less than one per digit. … We were really surprised by this, and it’s what led to us working on this LO-shot learning paper to try and theoretically understand what was going on.”
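The trick that makes this possible is “soft” labels: instead of each training example belonging to exactly one class, it carries a probability distribution over classes. A minimal sketch of the idea (not the paper’s exact algorithm, and with made-up toy positions and label values) is a distance-weighted soft-label nearest-neighbor classifier, where two training points on a line are enough to separate three classes:

```python
# Illustrative sketch: a distance-weighted soft-label nearest-neighbor
# classifier. Two prototypes on a 1-D line each carry a probability
# distribution over THREE classes, so the model distinguishes more classes
# than it has training points -- the core idea behind LO-shot learning.
# Positions and soft labels below are invented toy values, not from the paper.

def soft_label_knn(x, prototypes, soft_labels, eps=1e-9):
    """Classify scalar x by an inverse-distance-weighted sum of soft labels."""
    n_classes = len(soft_labels[0])
    scores = [0.0] * n_classes
    for p, label in zip(prototypes, soft_labels):
        w = 1.0 / (abs(x - p) + eps)  # closer prototypes weigh more
        for c, prob in enumerate(label):
            scores[c] += w * prob
    return max(range(n_classes), key=lambda c: scores[c])

# Two prototypes, three classes: class 0 dominates near 0.0, class 2 near 1.0,
# and class 1 exists only in the soft labels, claiming the middle region.
prototypes = [0.0, 1.0]
soft_labels = [[0.6, 0.4, 0.0],
               [0.0, 0.4, 0.6]]

for x in (0.1, 0.5, 0.9):
    print(x, "->", soft_label_knn(x, prototypes, soft_labels))
# 0.1 -> 0, 0.5 -> 1, 0.9 -> 2
```

Points near either prototype take its dominant class, while points in the middle — where both prototypes contribute equally — are captured by the third class that neither prototype individually represents.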
Sucholutsky stressed that this is still early-stage work. The new paper shows that LO-shot learning is possible; the researchers must now develop the algorithms required to perform it. In the meantime, he said the team has received interest from researchers in areas as diverse as volcanology, medical imaging, and cybersecurity — all of whom could benefit from this kind of A.I. learning.
“I’m hoping that we’ll be able to start rolling out these new tools really soon, but I encourage other machine learning researchers to also start exploring this direction to speed that process up,” Sucholutsky said.