A beginner’s guide to A.I. superintelligence and ‘the singularity’

Have you heard people talking about the technological singularity, either positively or negatively, but didn’t know enough to join in? Want to know if you should pack your bags and flee for the hills to escape the coming robot invasion? Or maybe join a church to welcome our new robot overlords? First, make sure to check out our beginner’s guide to all your singularity queries.

What exactly is the singularity?

The technological singularity, to use its full title, is a hypothetical event predicated on the creation of artificial superintelligence. Unlike the “narrow” A.I. we have today — which can be extremely good at carrying out one task, but can’t function across as many domains as a more generalized intelligence such as our own — a superintelligence would possess abilities greater than our own. This would trigger a kind of tipping point, setting off enormous changes in human society.

With A.I., particularly deep learning neural networks, hitting new milestones on a seemingly daily basis, here in 2017 the idea doesn’t seem quite as much like science fiction as it once did.

Is this a new idea?

No. As is the case with a lot of A.I., these ideas have been circulating for a while — even though it’s only relatively recently that fields like deep learning have started to break through into the mainstream. I.J. Good, a British mathematician who worked alongside Alan Turing as a cryptologist during World War II, first suggested the concept of an intelligence explosion back in 1965.

His common-sense view was that, as computers become increasingly powerful, so too does their capacity for problem-solving and coming up with inventive solutions. Eventually, superintelligent machines will design even better machines, or simply rewrite themselves to become smarter. The result is recursive self-improvement, which could be either very good or very bad for humanity as a whole.
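To get a feel for why this feedback loop runs away so quickly, here’s a minimal sketch (purely illustrative, with made-up numbers; nothing like it appears in Good’s 1965 paper) in which each redesign cycle improves the machine in proportion to how smart it already is:

```python
# Toy model of Good's "intelligence explosion" (illustrative numbers only).
# Assumption: each redesign cycle improves the machine's intelligence
# in proportion to its current level, so the gains compound.

intelligence = 1.0    # starting capability, in arbitrary "human-level" units
gain_per_cycle = 0.5  # assumed fractional improvement per redesign

for cycle in range(1, 11):
    # The smarter the machine, the bigger the improvement it can make
    # to its own design on the next pass.
    intelligence *= 1 + gain_per_cycle
    print(f"After redesign {cycle}: {intelligence:.1f}x the starting level")
```

Even with these arbitrary numbers, ten cycles leave the machine nearly 58 times smarter than where it started. That compounding is the heart of Good’s worry.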

This idea was later picked up by Vernor Vinge, a sci-fi writer, mathematics professor, and computer scientist. In his famous 1993 essay, “The Coming Technological Singularity,” Vinge predicted: “Within 30 years, we will have the technological means to create superhuman intelligence. Shortly after that, the human era will be ended.”

So Vinge was the guy who coined the term ‘singularity’ then?

Not really. Vinge may have popularized it, but the term “singularity” was first used in this context by the mathematician John von Neumann, one of the most important figures in modern computing. Towards the end of his life, in the 1950s, von Neumann was both fascinated and alarmed by “the ever-accelerating progress of technology and changes in the modes of human life, which gives the appearance of some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.”

When exactly is all this supposed to happen?

Thanks for asking the easy questions. As we’ve already mentioned, Vernor Vinge gave a 30-year timetable back in 1993, which would place the singularity sometime before or during 2023. Futurist and singularity enthusiast Ray Kurzweil, meanwhile, has pegged 2029 as the date when A.I. will pass a valid Turing test and achieve human levels of intelligence. After this, he thinks the singularity will take place in 2045, at which point “we will multiply our effective intelligence a billionfold by merging with the intelligence we have created.”

Others have argued that both of these are ridiculously premature estimates, based on a faulty understanding of what constitutes intelligence. And Terminator 2: Judgment Day, one of the greatest sci-fi movies of all time, placed the point at which computers become “self-aware” at 2:14 a.m. Eastern on August 29, 1997. So who really knows?

Skynet, self-aware computers, and merging with our own machines… I’m not really sure how to feel about any of this.

You, me, and everyone else. Imagining what the singularity would be like is a bit like trying to visualize a totally new color the world has never seen before. Remember that Aaron Sorkin-penned Mark Zuckerberg line from The Social Network: “If you guys were the inventors of Facebook, you’d have invented Facebook”? Well, the same thing applies to non-superintelligent beings like us trying to imagine how a superintelligence would view the world.

A superintelligence could solve all of our problems almost immediately. Or it could deem us an unnecessary risk and wipe us out in a moment. Or we could become its new pets, kept busy with whatever the human-entertaining equivalent of a cat’s laser toy might be.

Geez, you make it sound like the singularity is going to be some kind of godlike presence, dispensing a choice of wrath or salvation.

You’re not kidding! There’s definitely something religious about the zeal with which some people talk about the singularity. It’s almost like Silicon Valley’s answer to the rapture, in which we’re permanently unburdened of our status as the smartest guys and gals in the room by an all-seeing presence. Heck, there are even clouds (or, well, the cloud) involved in this heavenly scenario.

Case in point: Anthony Levandowski, the engineer who worked on Google’s autonomous car project, has created a religious nonprofit that looks a whole lot like a church devoted to the worship of A.I.

Why don’t we just pull the plug right now?

Well, that would certainly be one option, just like we could solve poverty, financial inequality, wars, and Justin Bieber by carrying out a complete, 100 percent extermination of the human race. If all A.I. research were to stop right now, then the prospect of the singularity would certainly be averted. But who wants that? And who would enforce it? Right now, A.I. is helping improve life for billions of people. It’s also making a whole lot of money for the owners of companies like Google, Facebook, Apple, and others.

At present, we don’t yet have artificial superintelligence, and even the most impressive examples of A.I. are comparatively narrow in what they can achieve. Even if it is possible to one day replicate a true intelligence inside a computer, some people hope there will be ways to control it without it taking over. For example, one idea might be to keep a superintelligence in an isolated environment with no access to the internet. (Then again, the researcher Eliezer Yudkowsky thinks that, like attempts to keep Hannibal Lecter locked up, no superintelligent A.I. will be contained for long.)

Other proposals say that we’ll be all right so long as we program A.I. to behave in a way that’s good and moral. (But Nick Bostrom’s “paperclip maximizer” thought experiment pokes a few holes in that one, too.) It’s definitely a concern, though — one voiced by everyone from Elon Musk to Stephen Hawking.

Then again, if superintelligence turns out to be the greatest thing that ever happened to humanity, do we really want to stop it?

Is the singularity our only concern with A.I.?

Absolutely not. There are plenty of other concerns involving artificial intelligence that don’t involve superintelligence — with the impact of A.I. on employment and the use of A.I. and robots in warfare being just two. In other words, cheer up: there’s a whole lot more than the singularity to worry about when it comes to A.I.
