Revisiting the rise of A.I.: How far has artificial intelligence come since 2010?

2010 doesn’t seem all that long ago. Facebook was already a giant, time-consuming leviathan; smartphones and the iPad were a daily part of people’s lives; The Walking Dead was a big hit on televisions across America; and the most talked-about popular musical artists were the likes of Taylor Swift and Justin Bieber. So pretty much like life as we enter 2020, then? Perhaps in some ways.

One area where things most definitely have moved on in leaps and bounds, however, is artificial intelligence. Over the past decade, A.I. has made some huge advances, both technically and in the public consciousness, that mark this out as one of the most important ten-year stretches in the field’s history. What have been the biggest advances? Funny you should ask; I’ve just written a list on exactly that topic.

The span of time from 2010 to 2020 brought some of the most amazing technological advances the world has ever seen, so in the spirit of reflection, we’ve compiled a series of stories that look back at the previous decade through a variety of different lenses. Explore more of our Ten Years of Tech series.

IBM Watson triumphs at Jeopardy!

To most people, few things say “A.I. is here” quite like seeing an artificial intelligence defeat two champion Jeopardy! players on prime time television. That’s exactly what happened in 2011, when IBM’s Watson computer trounced Brad Rutter and Ken Jennings, the two highest-earning American game show contestants of all time, at the popular quiz show.

It’s easy to dismiss attention-grabbing public displays of machine intelligence as hype-driven spectacle rather than serious, objective demonstration. What IBM had developed was seriously impressive, though. Unlike a game such as chess, with its rigid rules and confined board, Jeopardy! is far less predictable. Questions can be about anything and often involve complex wordplay, such as puns.

“I had been in A.I. classes and knew that the kind of technology that could beat a human at Jeopardy! was still decades away,” Jennings told me when I was writing my book Thinking Machines. “Or at least I thought that it was.” At the end of the game, Jennings scribbled a sentence on his answer board and held it up for the cameras. It read: “I for one welcome our new robot overlords.”

Here come the smart assistants

October 2011 is most widely remembered by Apple fans as the month in which company co-founder and CEO Steve Jobs passed away at the age of 56. However, it was also the month in which Apple unveiled its A.I. assistant Siri with the iPhone 4s.

The concept of an A.I. you could communicate with via spoken words had been dreamed about for decades. Former Apple CEO John Sculley had, remarkably, predicted a Siri-style assistant back in the 1980s, getting the date of Siri’s arrival right almost down to the month. But Siri was still a remarkable achievement. True, its initial implementation had some glaring weaknesses, and Apple arguably has never managed to offer a flawless smart assistant. Nonetheless, it introduced a new type of technology that rivals quickly pounced on, with everything from Google Assistant to Microsoft’s Cortana to Samsung’s Bixby following in its wake.

Of all the tech giants, Amazon has arguably done the most to advance the A.I. assistant in the years since. Its Alexa-powered Echo speakers have not only shown the potential of these A.I. assistants; they’ve demonstrated that they’re compelling enough to exist as standalone pieces of hardware. Today, voice-based assistants are so commonplace they barely even register. Ten years ago, most people had never used one.

Deep learning goes into overdrive

Deep learning neural networks are not wholly an invention of the 2010s. The basis for today’s artificial neural networks traces back to a 1943 paper by researchers Warren McCulloch and Walter Pitts. Much of the theoretical work underpinning neural nets, such as the breakthrough backpropagation algorithm, was pioneered in the 1980s. Some of the advances that led directly to modern deep learning were carried out in the first years of the 2000s, with work like Geoff Hinton’s advances in unsupervised learning.
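
For the curious, the backpropagation idea at the heart of all this can be sketched in a few lines. Below is a minimal, illustrative example in Python: a tiny two-layer network learning the classic XOR function via hand-coded backpropagation. The layer sizes, learning rate, and iteration count are arbitrary choices for illustration, not anything drawn from the papers mentioned above.

```python
import numpy as np

# A minimal backpropagation sketch: a tiny two-layer network learning XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))  # input -> hidden weights
W2 = rng.normal(size=(8, 1))  # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: compute each layer's activations.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Backward pass: propagate the output error back through the layers,
    # scaling by each layer's local gradient (the sigmoid derivative).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient descent: nudge the weights downhill on the squared error.
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```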

But the 2010s are the decade the technology went mainstream. In 2010, researchers George Dahl and Abdel-rahman Mohamed demonstrated that deep learning speech recognition tools could beat what were then the state-of-the-art industry approaches. After that, the floodgates opened. From image recognition (such as Jeff Dean and Andrew Ng’s famous paper on identifying cats) to machine translation, barely a week went by when the world wasn’t reminded just how powerful deep learning could be.

It wasn’t just a good PR campaign either, the way an unknown artist might finally stumble across fame and fortune after doing the same work in obscurity for decades. The 2010s are the decade in which the quantity of available data exploded, making it possible to leverage deep learning in a way that simply wouldn’t have been possible at any previous point in history.

DeepMind blows our minds

Of all the companies doing amazing A.I. work, DeepMind deserves its own entry on this list. Founded in September 2010, the deep learning company remained largely unknown until it was bought by Google for what seemed like a bonkers $500 million in January 2014. DeepMind has more than made up for that price tag in the years since, though.

Much of DeepMind’s most public-facing work has involved the development of game-playing A.I.s, capable of mastering computer games ranging from classic Atari titles like Breakout and Space Invaders (with the help of some handy reinforcement learning algorithms) to, more recently, attempts at StarCraft II and Quake III Arena.
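
Those “handy reinforcement learning algorithms” reward a closer look. DeepMind paired them with deep neural networks, but the core update rule can be sketched with a plain lookup table. Here is a minimal, illustrative Q-learning example in Python, played on a made-up five-state corridor “game” (a toy of my own construction, not DeepMind’s code):

```python
import random

# Toy Q-learning: an agent in a five-state corridor learns to walk right.
# State 4 is the goal; reaching it earns a reward of 1. DeepMind's DQN
# replaced this lookup table with a deep neural network.
N_STATES, ACTIONS = 5, [-1, +1]        # actions: step left or step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: usually exploit the best-known action, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # The Q-learning update: nudge the estimate toward the reward
        # plus the discounted value of the best follow-up action.
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# The learned policy: every non-goal state now prefers stepping right (+1).
print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)})
```

The more episodes the agent plays, the better its value estimates become, which is exactly the dynamic described below.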

Demonstrating the core tenet of machine learning, these game-playing A.I.s got better the more they played. In the process, they were able to form new strategies that, in some cases, even their human creators weren’t familiar with. All of this work helped set the stage for DeepMind’s biggest success of all…

Beating humans at Go

As this list has already shown, there is no shortage of examples of A.I. beating human players at a variety of games. But Go, a Chinese board game in which the aim is to surround more territory than your opponent, was different. Unlike games in which players can be beaten simply by crunching numbers faster than humans are capable of, Go has a total number of allowable board positions that is mind-bogglingly vast: far more than the total number of atoms in the observable universe. That makes brute-force attempts to calculate answers virtually impossible, even using a supercomputer.
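
The arithmetic behind that claim is easy to check. Each of the board’s 361 points can be empty, black, or white, giving a crude upper bound of 3^361 arrangements; the true count of legal positions is lower (roughly 2.1 × 10^170) but still dwarfs the commonly cited estimate of 10^80 atoms in the observable universe. A few lines of Python make the scale vivid:

```python
# Back-of-the-envelope arithmetic behind the "more positions than atoms" claim.
board_points = 19 * 19            # 361 intersections on a full-sized board
upper_bound = 3 ** board_points   # empty/black/white at each point: ~1.7e172
atoms_in_universe = 10 ** 80      # commonly cited rough estimate

print(len(str(upper_bound)))                       # 173 digits long
print(len(str(upper_bound // atoms_in_universe)))  # still 93 digits left over
```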

Nonetheless, DeepMind managed it. In October 2015, AlphaGo became the first computer Go program to beat a human professional Go player without handicap on a full-sized 19×19 board. The next year, 60 million people tuned in live to watch one of the world’s greatest Go players, Lee Sedol, take on AlphaGo. By the end of the series, AlphaGo had beaten Sedol four games to one.

In November 2019, Sedol announced his intention to retire as a professional Go player, citing A.I. as the reason. “Even if I become the number one, there is an entity that cannot be defeated,” he said. Imagine if LeBron James announced he was quitting basketball because a robot was better at shooting hoops than he was. That’s the equivalent!

Cars that drive themselves

In the first years of the twenty-first century, the idea of an autonomous car seemed like it would never move beyond science fiction. In MIT and Harvard economists Frank Levy and Richard Murnane’s 2004 book The New Division of Labor, driving a vehicle was described as a task too complex for machines to carry out. “Executing a left turn against oncoming traffic involves so many factors that it is hard to imagine discovering the set of rules that can replicate a driver’s behavior,” they wrote.

In 2010, Google officially unveiled its autonomous car program, now called Waymo. Over the decade that followed, dozens of other companies (including tech heavy hitters like Apple) started developing their own self-driving vehicles. Collectively, these cars have driven millions of miles on public roads, apparently proving less accident-prone than human drivers in the process.

Foolproof full autonomy is still a work in progress, but this was nonetheless one of the most visible demonstrations of A.I. in action during the 2010s.

The rise of generative adversarial networks

The dirty secret of much of today’s A.I. is that its core algorithms, the technologies that make it tick, were actually developed several decades ago. What’s changed is the processing power available to run these algorithms and the massive amounts of data they have to train on. Hearing about a wholly original approach to building A.I. tools is therefore surprisingly rare.

Generative adversarial networks certainly qualify. Often abbreviated to GANs, this class of machine learning system was invented by Ian Goodfellow and colleagues in 2014. No less an authority than A.I. expert Yann LeCun has described the idea as “the coolest idea in machine learning in the last twenty years.”

At least conceptually, the theory behind GANs is pretty straightforward: take two cutting-edge artificial neural networks and pit them against one another. One network creates something, such as a generated image. The other network then attempts to work out which images are computer-generated and which are not. Over time, this adversarial process pushes the “generator” network to become good enough at creating images that it can reliably fool the “discriminator” network.
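
In code, the tug-of-war looks something like the sketch below, written in PyTorch. To keep it self-contained, the “real” data is just samples from a one-dimensional Gaussian rather than images, and the network sizes and learning rates are arbitrary illustrative choices:

```python
import torch
import torch.nn as nn

# A minimal GAN sketch: a generator forges samples, a discriminator judges them.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0  # "real" data: samples from N(3.0, 0.5)
    fake = G(torch.randn(64, 8))           # the generator's forgeries, from noise

    # Train the discriminator: real samples should score 1, forgeries 0.
    # (detach() keeps this step from updating the generator's weights.)
    d_loss = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator: make the discriminator score its forgeries as real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# The forgeries' statistics drift toward the real distribution's.
print(fake.mean().item(), fake.std().item())  # roughly 3.0 and 0.5
```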

The power of generative adversarial networks was seen most widely when a collective of artists used them to create original “paintings” developed by A.I. The result sold for a shocking $432,500 at a Christie’s auction in 2018.
