
Some of the finest minds in AI descend upon London’s deep learning summit

Artificial intelligence has never been as present — or as cool — as it is today. And, after years on the periphery, deep learning has become the most successful and most popular machine learning method around.

Deep learning algorithms can now identify objects better than most humans, outperform doctors at diagnosing diseases, and beat grandmasters at their own board games. In the past year alone, Google DeepMind's AlphaGo defeated one of the world's greatest Go players — a feat most experts guessed was at least another decade away.


Some of the finest minds in AI are at the Re•Work Deep Learning Summit in London this week to discuss the entrenched challenges in artificial intelligence and the solutions emerging through deep learning. Researchers from Google, Apple, Microsoft, Oxford, and Cambridge (to name a few) are in attendance or giving talks. Re•Work founder Nikita Johnson told Digital Trends, “Our events bring together a multidisciplinary mix of three core communities: startups, academia, and industry, to encourage collaboration and discussion.”

Over the next few weeks we’ll explore these topics in depth and hear from experts about how intelligent algorithms will transform our everyday lives tomorrow and in the years to come.

But what exactly is deep learning?

Deep learning is a machine learning method that trains systems by using large amounts of data and multiple layers of processing.

Still confused? You’re not alone.

“People often say, ‘You can’t understand deep learning really. It’s too abstract,’” Neil Lawrence, professor of Machine Learning and Computational Biology at the University of Sheffield, quipped today during his opening presentation. “But I think people can grasp it intuitively.”

To help laymen — and even some enthusiasts — grasp the concept of deep learning, Lawrence drew a parallel to a classic carnival game, in which a player drops a ball down a pegged board, hoping it lands in a slot at the bottom. Reaching a specific slot is difficult — almost pure chance. But imagine you could remove or reposition pegs to guide balls toward designated slots. That is something like the task performed by deep learning algorithms.

“The difficult aspect is adjusting the ‘pegs’ such that the ‘yeses’ go into the ‘yes’ slot and the ‘nos’ go into the ‘no’ slot,” Lawrence said.
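Lawrence's pegboard is a metaphor for training: the "pegs" are a network's adjustable weights, and learning means nudging them until inputs land in the right output slot. As a rough illustration (not anything presented at the summit), here is a minimal sketch of that idea — a single-neuron classifier whose two "pegs," a weight and a bias, are repeatedly nudged by gradient descent until the "yes" examples land in the "yes" slot. The toy data is invented for the example.

```python
import math

# Toy data: one feature per example; label 1 = "yes", 0 = "no"
data = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]

w, b = 0.0, 0.0   # the "pegs" we are allowed to adjust
lr = 0.5          # how hard each nudge pushes

def predict(x):
    # Squash the weighted input into a 0-to-1 "slot" score (sigmoid)
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

for _ in range(1000):          # repeatedly nudge the pegs
    for x, y in data:
        p = predict(x)
        # The gradient of the log-loss tells us which way to move
        # each peg so this example lands closer to its correct slot
        w -= lr * (p - y) * x
        b -= lr * (p - y)

print([round(predict(x)) for x, _ in data])  # → [0, 0, 1, 1]
```

Real deep learning stacks many layers of such units and adjusts millions of pegs at once, but the core loop — predict, measure the miss, nudge the weights — is the same.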

Sounds simple? It’s not.

It’s a problem people have grappled with for decades, and it’s still far from solved. Even today’s best deep learning systems can do one task well but fail when asked to do anything even marginally different. As DeepMind’s Raia Hadsell pointed out, you can spend weeks or months training an algorithm to play an Atari game, but that knowledge can’t be generalized. In other words, you can teach a system to play Pong, but you have to start from scratch if you want it to play Space Invaders.

There may be solutions — Hadsell thinks her team at DeepMind has at least one — but the shortcomings show just how much work researchers have ahead of them.
