Neural networks? Machine learning? Here's your secret decoder for A.I. buzzwords


A.I. is everywhere at the moment, and it’s responsible for everything from the virtual assistants on our smartphones to the self-driving cars soon to be filling our roads to the cutting-edge image recognition systems reported on by yours truly.

Unless you’ve been living under a rock for the past decade, there’s a good chance you’ve heard of it before — and probably even used it. Right now, artificial intelligence is to Silicon Valley what One Direction is to 13-year-old girls: an omnipresent source of obsession to throw all your cash at, while daydreaming about getting married whenever Harry Styles is finally ready to settle down. (Okay, so we’re still working on the analogy!)

But what exactly is A.I.? — and can terms like “machine learning,” “artificial neural networks,” “artificial intelligence” and “Zayn Malik” (we’re still working on that analogy…) be used interchangeably?

To help you make sense of the buzzwords and jargon you’ll hear when people talk about A.I., we put together this simple guide to the different flavors of artificial intelligence — if only so that you don’t make any faux pas when the machines finally take over.

Artificial intelligence

We won’t delve too deeply into the history of A.I. here, but the important thing to note is that artificial intelligence is the tree that all the following terms are branches of. For example, reinforcement learning is a type of machine learning, which is a subfield of artificial intelligence. However, artificial intelligence isn’t (necessarily) reinforcement learning. Got it?

There’s no official consensus on what A.I. means (some people suggest it’s simply the cool things computers can’t do yet), but most would agree that it’s about making computers perform actions that would be considered intelligent were they carried out by a person.

The term was first coined in 1956, at a summer workshop at Dartmouth College in New Hampshire. The big distinction in A.I. today is between domain-specific Narrow A.I. and Artificial General Intelligence. So far, no one has built a general intelligence. Once they do, all bets are off…

Symbolic A.I.

You don’t hear so much about Symbolic A.I. today. Also referred to as Good Old Fashioned A.I., Symbolic A.I. is built around logical steps which can be given to a computer in a top-down manner. It entails providing lots and lots of rules to a computer (or a robot) on how it should deal with a specific scenario.
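To make the idea concrete, here is a minimal sketch of that top-down, rule-based approach in Python. The toy “diagnosis” rules are invented purely for illustration; a real expert system would encode hundreds or thousands of such rules written by human specialists.

```python
# A minimal sketch of the Symbolic A.I. idea: intelligence as hand-written rules.
# The rules below are invented for illustration, not taken from any real system.

def diagnose(symptoms):
    """Apply hard-coded if/then rules, top-down, to a set of observed symptoms."""
    rules = [
        ({"fever", "cough"}, "possible flu"),
        ({"sneezing", "itchy eyes"}, "possible allergies"),
        ({"fever", "stiff neck"}, "see a doctor immediately"),
    ]
    for conditions, conclusion in rules:
        if conditions.issubset(symptoms):   # every condition in the rule must hold
            return conclusion
    return "no matching rule"               # plenty of rules, but no mercy for unseen cases

print(diagnose({"fever", "cough", "headache"}))  # -> possible flu
```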

This approach led to a lot of early breakthroughs, but it turned out that systems which worked very well in the lab, where every variable could be perfectly controlled, often fared less well in the messiness of everyday life. As one writer quipped, early Symbolic A.I. systems were a little like the god of the Old Testament — with plenty of rules, but no mercy.

Today, researchers like Selmer Bringsjord are fighting to bring back a focus on logic-based Symbolic A.I., arguing for the superiority of logical systems that can be understood by their creators.

Machine Learning

If you hear about a big A.I. breakthrough these days, chances are that unless a big noise is made to suggest otherwise, you’re hearing about machine learning. As its name implies, machine learning is about making machines that, well, learn.

Like the broader field of A.I., machine learning has multiple subcategories, but what they all have in common is a statistical approach: taking data and applying algorithms to it in order to gain knowledge.
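As a minimal illustration of that idea, the sketch below fits a simple statistical model to a handful of made-up data points using the scikit-learn library, then uses what it has “learned” to make a prediction about an example it has never seen. The house-price numbers are invented for illustration.

```python
# A minimal sketch of "learning from data": fit a statistical model to examples
# instead of hand-coding rules. The toy house-price numbers below are made up.
from sklearn.linear_model import LinearRegression

# toy data: floor area in square metres -> price in thousands
X = [[30], [45], [60], [80], [100]]
y = [110, 150, 200, 260, 320]

model = LinearRegression()
model.fit(X, y)                # the "learning" step: estimate parameters from the data

print(model.predict([[70]]))   # predict the price of an unseen 70 m^2 flat
```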

There is a plethora of different branches of machine learning, but the one you’ll probably hear the most about is…

Neural Networks

If you’ve spent any time in our Cool Tech section, you’ve probably heard about artificial neural networks. As brain-inspired systems designed to replicate the way that humans learn, neural networks adjust their own internal parameters to find the link between input and output — or cause and effect — in situations where this relationship is complex or unclear.

The concept of artificial neural networks actually dates back to the 1940s, but it was really only in the past few decades that it started to truly live up to its potential, aided by the arrival of algorithms like backpropagation, which allow a neural network to adjust the weights in its hidden layers of neurons whenever the outcome doesn’t match what the creator is hoping for (for instance, when a network designed to recognize dogs misidentifies a cat).

This decade, artificial neural networks have benefited from the arrival of deep learning, in which different layers of the network extract different features until it can recognize what it is looking for.
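Here is a toy version of those ideas: a tiny feedforward network, written in plain NumPy, that learns the XOR function using backpropagation. Everything about it (the layer sizes, the learning rate, the number of training steps) is chosen purely for illustration.

```python
# A toy feedforward network trained with backpropagation, in plain NumPy.
# It learns XOR, a classic case where the input/output link isn't a simple rule.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR: true only when the inputs differ

# one hidden layer of 8 neurons, sigmoid activations
W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 1.0

for step in range(10000):
    # forward pass: compute the network's current guesses
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # backward pass (backpropagation): push the error back through the layers
    err_out = (out - y) * out * (1 - out)
    err_h = (err_out @ W2.T) * h * (1 - h)

    # nudge weights and biases to shrink the mismatch between guess and target
    W2 -= lr * (h.T @ err_out)
    b2 -= lr * err_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ err_h)
    b1 -= lr * err_h.sum(axis=0, keepdims=True)

print(out.round(2))   # typically close to [[0], [1], [1], [0]] after training
```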

Under the neural network heading, there are different network architectures — with feedforward and convolutional networks likely to be the ones you should mention if you get stuck next to a Google engineer at a dinner party.

Reinforcement Learning

Reinforcement learning is another flavor of machine learning. It’s heavily inspired by behaviorist psychology, and is based around the idea that a software agent can learn to take actions in an environment in order to maximize a reward.

As an example, back in 2015 Google’s DeepMind released a paper showing how it had trained an A.I. to play classic video games, with no instruction other than the on-screen score and the approximately 30,000 pixels that made up each frame. Told only to maximize its score, the software agent gradually learned to play each game through trial and error.

(Video: MarI/O - Machine Learning for Video Games)

Unlike an expert system, reinforcement learning doesn’t need a human expert to tell it how to maximize a score. Instead, it figures it out over time. In some cases, the rules it is learning may be fixed (as with playing a classic Atari game). In others, it keeps adapting as time goes by.
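For a sense of how that trial-and-error learning looks in practice, here is a bare-bones Q-learning sketch in Python. The “game” is a made-up five-cell corridor rather than an Atari title: the agent is told nothing except the reward it receives, and gradually works out that stepping right is the move that maximizes its score.

```python
# A toy reinforcement-learning sketch: tabular Q-learning on a five-cell corridor.
# The environment is invented for illustration, not DeepMind's Atari setup.
import random

N_STATES, GOAL = 5, 4                       # cells 0..4; reaching cell 4 ends the episode
ACTIONS = [-1, +1]                          # 0 = step left, 1 = step right
Q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[state][action] = learned value of that move
alpha, gamma, epsilon = 0.5, 0.9, 0.1       # learning rate, discount, exploration rate

for episode in range(500):
    state = 0
    while state != GOAL:
        # explore occasionally (or when the agent has no preference yet), otherwise exploit
        if random.random() < epsilon or Q[state][0] == Q[state][1]:
            action = random.randrange(2)
        else:
            action = 0 if Q[state][0] > Q[state][1] else 1

        nxt = min(max(state + ACTIONS[action], 0), N_STATES - 1)
        reward = 1.0 if nxt == GOAL else 0.0

        # Q-learning update: nudge the estimate toward reward + discounted future value
        Q[state][action] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][action])
        state = nxt

# the learned policy should be "always step right" (all 1s) in every non-goal cell
print([0 if Q[s][0] > Q[s][1] else 1 for s in range(N_STATES - 1)])
```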

Evolutionary Algorithms

Known as generic population-based metaheuristic optimization algorithms to those who have been formally introduced, evolutionary algorithms are another type of machine learning, designed to mimic the process of natural selection inside a computer.

The process begins with a programmer specifying the goals they want the algorithm to achieve. For example, NASA has used evolutionary algorithms to design satellite components. In that case, the goal might be to come up with a design that fits inside a 10cm x 10cm box, radiates a spherical or hemispherical pattern, and is able to operate at a certain Wi-Fi band.

The algorithm then generates successive iterations of candidate designs, testing each one against the stated goals. When a design eventually ticks all the right boxes, the process stops. In addition to helping NASA design satellites, evolutionary algorithms are a favorite of creatives using artificial intelligence for their work, such as the designers of this nifty furniture.
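Here is a bare-bones version of that loop in Python. The “design goals” and scoring function are invented for illustration; a real project like NASA’s antenna work would score each candidate with physics simulations rather than a simple distance-to-target measure.

```python
# A bare-bones evolutionary algorithm: evolve a list of numbers toward a target
# "design spec". The spec and scoring are invented for illustration only.
import random

TARGET = [10.0, 10.0, 2.4]   # pretend goals (say, width in cm, height in cm, band in GHz)

def fitness(candidate):
    # higher is better: negative squared distance from the stated goals
    return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

def mutate(parent):
    # copy the parent and randomly perturb one of its "genes"
    child = parent[:]
    i = random.randrange(len(child))
    child[i] += random.gauss(0, 0.5)
    return child

# generation zero: thirty random candidate designs
population = [[random.uniform(0, 20) for _ in range(3)] for _ in range(30)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)     # best designs first
    if fitness(population[0]) > -0.01:             # a design ticks all the boxes: stop
        break
    survivors = population[:10]                    # selection: keep the fittest third
    population = survivors + [mutate(random.choice(survivors)) for _ in range(20)]

print(generation, [round(x, 2) for x in population[0]])   # best design found so far
```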
