
Deep learning vs. machine learning: What’s the difference between the two?

In recent months, Microsoft, Google, Apple, Facebook, and other companies have declared that we no longer live in a mobile-first world. Instead, it’s an artificial intelligence-first world, where digital assistants and other services will be your primary means of finding information and getting tasks done. Your typical smartphone or PC is now a secondary tool.

Backing this new frontier are two terms you’ll likely hear often: machine learning and deep learning. These are two methods of “teaching” artificial intelligence to perform tasks, but their uses go well beyond creating smart assistants. What’s the difference? Here’s a quick breakdown.

Computers now see, hear, and speak

With the help of machine learning, computers can now be “trained” to predict the weather, forecast stock market outcomes, understand your shopping habits, control robots in a factory, and so on. Google, Amazon, Facebook, Netflix, LinkedIn, and other popular consumer-facing services are all backed by machine learning. But at the heart of all this learning is what’s known as an algorithm.

Simply put, an algorithm is not a complete computer program (a set of instructions), but a limited sequence of steps designed to solve a single problem. A search engine, for example, relies on an algorithm that takes the text you enter into the search field and scans the connected database to return related results. It takes specific steps to achieve a single, specific goal.
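To make that concrete, here is a minimal, purely illustrative sketch of such an algorithm in Python: a toy “search” over a tiny in-memory collection of documents, not how any real search engine works. The documents and the function name are invented for the example.

```python
# A toy "search" algorithm: a fixed sequence of steps with one specific goal,
# namely returning the entries that contain every word in the query.
documents = {
    "intro-to-ml": "machine learning teaches computers to improve from data",
    "dl-basics": "deep learning uses layered neural networks",
    "checkers": "arthur samuel taught a program to play checkers",
}

def search(query):
    """Return the ids of documents containing every word in the query."""
    words = query.lower().split()
    return [
        doc_id for doc_id, text in documents.items()
        if all(word in text for word in words)
    ]

print(search("deep learning"))  # ['dl-basics']
print(search("learning"))       # ['intro-to-ml', 'dl-basics']
```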

Machine learning has actually been around since 1956. Arthur Samuel didn’t want to write a highly detailed, lengthy program that could enable a computer to beat him at checkers. Instead, he created an algorithm that let the computer play against itself thousands of times so it could “learn” how to perform as a stand-alone opponent. By 1962, the program had beaten the Connecticut state champion.

Thus, at its core, machine learning is based on trial and error. We can’t write a program by hand that tells a self-driving car how to distinguish a pedestrian from a tree or another vehicle, but we can create an algorithm that lets a program learn to solve that problem from data. Algorithms can also be created to help programs predict the path of a hurricane, diagnose Alzheimer’s early, determine the world’s most overpaid and underpaid soccer stars, and so on.

Machine learning typically runs on relatively modest hardware, and it breaks a problem down into parts. Each part is solved in order, and the results are combined into a single answer. Tom Mitchell of Carnegie Mellon University, a well-known machine learning researcher, defines a program as “learning” from experience when its performance at a specific task improves with that experience. In other words, machine learning algorithms let programs make predictions and, through trial and error, get better at those predictions over time.

Here are the four main types of machine learning:

Supervised machine learning

In this scenario, you provide a computer program with labeled data. For instance, if the assigned task is to separate pictures of boys and girls using an image-sorting algorithm, pictures of a male child carry a “boy” label and pictures of a female child carry a “girl” label. This is the “training” dataset, and the labels remain in place until the program can sort the images at an acceptable success rate.
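Here’s a minimal sketch of that idea in Python using scikit-learn. Rather than raw photos, it assumes each image has already been reduced to two made-up numeric features (hair length and height); the numbers, feature choices, and labels are invented purely for illustration.

```python
# Supervised learning sketch: the model is trained on labeled examples.
from sklearn.tree import DecisionTreeClassifier

X_train = [[5, 120], [30, 118], [8, 130], [25, 125]]  # [hair_length_cm, height_cm]
y_train = ["boy", "girl", "boy", "girl"]              # the attached labels

model = DecisionTreeClassifier()
model.fit(X_train, y_train)        # "training" on the labeled dataset

print(model.predict([[28, 122]]))  # the model's guess for a new, unlabeled example
```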

Semi-supervised machine learning

In this case, only a few of the images are labeled. The program uses an algorithm to make its best guess about the unlabeled images, and those guesses are fed back in as training data. A new batch of images is then provided, again with only a few labeled. The process repeats until the program can distinguish between boys and girls at an acceptable rate.
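A hand-rolled self-training loop, one simple flavor of semi-supervised learning, looks roughly like the sketch below. It reuses the same invented feature format as the previous example; the batches and labels are made up for illustration.

```python
# Semi-supervised (self-training) sketch: a few labeled examples, then the
# model's own guesses on unlabeled batches are folded back in as training data.
from sklearn.tree import DecisionTreeClassifier

labeled_X = [[5, 120], [30, 118]]          # the few labeled examples
labeled_y = ["boy", "girl"]
batches = [
    [[8, 130], [25, 125]],                 # first unlabeled batch
    [[6, 127], [28, 122]],                 # next unlabeled batch
]

model = DecisionTreeClassifier()
for batch in batches:
    model.fit(labeled_X, labeled_y)
    guesses = model.predict(batch)         # best guess for each unlabeled image
    labeled_X += batch                     # treat the guesses as labels
    labeled_y += list(guesses)             # for the next round of training

model.fit(labeled_X, labeled_y)            # final pass over everything
print(model.predict([[7, 125]]))
```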

Unsupervised machine learning

This type of machine learning doesn’t involve labels at all. Instead, the program is blindly thrown into the task of splitting images of boys and girls into two groups, using one of two methods. One, called “clustering,” groups similar objects together based on characteristics such as hair length, jaw size, eye placement, and so on. The other, called “association,” creates if/then rules based on similarities the program discovers. In other words, it finds a common pattern among the images and sorts them accordingly.
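The clustering approach can be sketched in a few lines with scikit-learn’s KMeans. No labels are supplied; the algorithm simply splits the (again invented) feature vectors into two groups by similarity, and a human still has to decide what each group means.

```python
# Unsupervised clustering sketch: split unlabeled data into two groups.
from sklearn.cluster import KMeans

X = [[5, 120], [30, 118], [8, 130], [25, 125], [6, 127], [28, 122]]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)   # e.g. [0 1 0 1 0 1]: two groups with no names attached
```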

Reinforcement machine learning

Chess is an excellent example of this type of algorithm. The program knows the rules of the game and how to play, and it goes through the motions of completing a match. The only information it receives is whether it won or lost. It keeps replaying the game, keeping track of the moves that led to wins, until it can win consistently.
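Chess itself is far too large to sketch in a few lines, so the example below substitutes a toy stand-in: tabular Q-learning for an agent on a row of six squares that is rewarded only when it reaches the last square. All names and numbers are illustrative, not from the article.

```python
# Reinforcement learning sketch (tabular Q-learning on a toy game):
# the only feedback is a "win" reward for reaching the final square.
import random

n_states = 6                      # squares 0..5; reaching square 5 wins
actions = (-1, +1)                # step left or right
q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for episode in range(500):        # replay the game many times
    state = 0
    while state != n_states - 1:
        if random.random() < epsilon:                      # explore a little
            action = random.choice(actions)
        else:                                              # exploit what has worked
            action = max(actions, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), n_states - 1)
        reward = 1.0 if next_state == n_states - 1 else 0.0  # win signal only
        best_next = max(q[(next_state, a)] for a in actions)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# The learned policy: the best action in each non-terminal state (all +1).
print([max(actions, key=lambda a: q[(s, a)]) for s in range(n_states - 1)])
```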

Now it’s time to move on to a deeper subject: deep learning.

Deep learning

Deep learning is basically machine learning on a “deeper” level (pun unavoidable, sorry). It’s inspired by how the human brain works, but it requires high-end machines with discrete graphics cards capable of crunching through huge numbers of calculations, plus enormous amounts of data. Given only small amounts of data, it actually delivers lower performance.

Unlike standard machine learning algorithms, which break a problem down into parts and solve them individually, deep learning solves the problem end to end. Better yet, the more data and training time you feed a deep learning algorithm, the better it gets at the task.

In our machine learning examples, we used images of boys and girls. The program sorted those images largely based on spoon-fed features. With deep learning, those features aren’t handed over. Instead, the program scans all the pixels in an image to discover edges it can use to distinguish a boy from a girl, then ranks those edges and shapes by how important they seem to be for telling the two apart.

On an even simpler level, machine learning distinguishes between a square and a triangle based on information provided by humans: squares have four points, triangles have three. With deep learning, the program doesn’t start with that pre-fed information. Instead, it works out for itself how many lines the shapes have, whether those lines are connected, and whether they are perpendicular. Eventually, the algorithm would also figure out that a circle thrown into the mix doesn’t fit into either group.
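A rough sketch of that “learn from raw pixels” idea is below, using a small neural network (scikit-learn’s MLPClassifier). Instead of hand-fed rules like “squares have four points,” the model only sees flattened 5x5 pixel grids of crude squares and triangles and has to work out its own internal representation. The tiny images, noise level, and network size are all invented for illustration.

```python
# Deep-learning-flavored sketch: classify shapes from raw pixels rather than
# hand-written rules. (A real deep network would be far larger.)
import numpy as np
from sklearn.neural_network import MLPClassifier

square = np.array([
    [1, 1, 1, 1, 1],
    [1, 0, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [1, 1, 1, 1, 1],
])
triangle = np.array([
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 1, 0, 1, 0],
    [0, 1, 0, 1, 0],
    [1, 1, 1, 1, 1],
])

def noisy_copy(img, rng):
    """Flip a few random pixels so the network sees varied examples."""
    noise = rng.random(img.shape) < 0.1
    return np.logical_xor(img, noise).astype(float).ravel()

rng = np.random.default_rng(0)
X = [noisy_copy(img, rng) for img in (square, triangle) for _ in range(50)]
y = ["square"] * 50 + ["triangle"] * 50

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X, y)
print(model.predict([square.ravel().astype(float), triangle.ravel().astype(float)]))
```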

Again, this “deeper” process requires more hardware to handle the large amounts of data involved. The machines doing that work tend to live in large data centers, running artificial neural networks that chew through the data supplied to artificial intelligence applications. Programs that use deep learning also take longer to train, because they’re learning on their own instead of relying on hand-fed shortcuts.

“Deep Learning breaks down tasks in ways that makes all kinds of machine assists seem possible, even likely. Driverless cars, better preventive healthcare, even better movie recommendations, are all here today or on the horizon,” writes Nvidia’s Michael Copeland. “With Deep Learning’s help, A.I. may even get to that science fiction state we’ve so long imagined.”

Is Skynet on the way? Not yet

A great recent example of deep learning is translation: software that can listen to a presenter speaking in English and translate the words into another language, as both text and an electronic voice, in real time. Getting there was a slow burn over the years, thanks to the differences between languages, the variety in how people speak and how their voices sound, and hardware that had to mature to keep up.

Deep learning is also behind conversation-carrying chatbots, Amazon Alexa, Microsoft Cortana, and features across Facebook, Instagram, and more. On social media, deep learning algorithms are what cough up contact and page suggestions. Deep learning even helps companies tailor their creepy advertising to your tastes, even when you’re not on their sites. Yay for technology.

“Looking to the future, the next big step will be for the very concept of the ‘device’ to fade away,” says Google CEO Sundar Pichai. “Over time, the computer itself—whatever its form factor—will be an intelligent assistant helping you through your day. We will move from mobile first to an A.I. first world.”
