Computers will soon outsmart us. Does that make an A.I. rebellion inevitable?

The question, “Will Computers Revolt?” is really many different questions rolled into one. Will computers become the dominant intelligence on the planet and will they take our place? What does being “dominant” mean? Will computers and humans be in conflict? Will that conflict be violent? Will intelligent computers take jobs and resources from humans?

Will Computers Revolt? Preparing for the Future of Artificial Intelligence, by Charles J. Simon

Most AI experts agree that computers will eventually exceed humans in thinking ability.  But then, even more questions arise. When will it happen? What would it be like to ‘exceed humans in thinking ability’? Will computer intelligence be just like human intelligence—only faster? Or will it be radically different?

Although today’s AI systems have remarkable abilities, they are not “thinking” in any general sense of the word. Accordingly, we now use terms such as AGI (Artificial General Intelligence), Strong AI, and True AI to differentiate the idea of true thinking from today’s AI systems, which have tremendous capabilities but more limited scope.

With the coming of AGI, many new risks will emerge, but before exploring them, let’s consider how far in the future this is likely to happen.

When Will AGI Happen?

Sooner than you think!  Why don’t we already have AGI? Two issues hold us back:

  1. Creating the computational power needed for AGI
  2. Knowing what software to write for AGI

AI experts have produced differing estimates of the computational power of the human brain, along with projections of how quickly CPU power will grow. The lines eventually cross at a “singularity” (a term popularized by Ray Kurzweil), with CPUs exceeding brains in brute-force computation in ten years, or twenty, or half a century, depending on the underlying assumptions.
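
Just to make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python; the operations-per-second figures and the doubling period are illustrative assumptions, not estimates from the book.

```python
import math

# Illustrative assumptions (not the author's figures):
# - a present-day high-end system delivers ~1e17 operations/second
# - published estimates of the brain's raw capacity span ~1e16 to ~1e20 ops/second
# - effective compute roughly doubles every 2 years
current_ops = 1e17
doubling_years = 2.0

for brain_ops in (1e16, 1e18, 1e20):
    if brain_ops <= current_ops:
        years = 0.0
    else:
        years = doubling_years * math.log2(brain_ops / current_ops)
    print(f"brain estimate {brain_ops:.0e} ops/s -> crossover in ~{years:.0f} years")
```

Depending on which brain estimate you plug in, the crossover lands anywhere from “already here” to decades out, which is exactly the spread seen in the published predictions.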

But this may be the wrong question. We all know that lightning-fast searches on a properly-indexed database can produce results a million- or billion-fold faster than the brute-force approach. What portion of AGI will be amenable to this type of software efficiency?
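
As a toy illustration of that point (mine, not the book’s), compare a brute-force scan of a list with a hash-indexed lookup; even at a modest one million records the indexed query is several orders of magnitude faster.

```python
import random
import time

# Build a small "database" of one million records.
n = 1_000_000
keys = list(range(n))
random.shuffle(keys)
records = [(k, f"value-{k}") for k in keys]

# A hash index: one-time build cost, then O(1) lookups instead of O(n) scans.
index = {k: v for k, v in records}

target = keys[-1]  # worst case for the scan: the last record stored

start = time.perf_counter()
brute = next(v for k, v in records if k == target)  # brute-force linear scan
scan_time = time.perf_counter() - start

start = time.perf_counter()
fast = index[target]                                # indexed lookup
index_time = time.perf_counter() - start

assert brute == fast
print(f"scan: {scan_time:.5f}s  index: {index_time:.8f}s  "
      f"speedup: ~{scan_time / max(index_time, 1e-9):,.0f}x")
```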

Boston Dynamics’ robots already exhibit the fluid motion and coordination which we humans get from our cerebellum’s 56 billion neurons—65% of our brain’s computational capacity. The robots accomplish this with a few CPUs—not because those CPUs exceed the computational power of 56 billion neurons, but because the designers of robotic software understand physics, forces, and feedback, and can write software that is far more efficient than the trial-and-error learning approach used by your brain.
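
A minimal sketch of that point (illustrative only, and certainly not Boston Dynamics’ actual code): when the designer already knows the physics, a few lines of explicit feedback control do what would otherwise have to be learned by trial and error. The toy pendulum model and the controller gains below are assumptions chosen just to make the example run.

```python
def simulate_balance(kp=25.0, kd=6.0, dt=0.01, steps=300):
    """Drive a toy inverted-pendulum tilt angle back to zero with PD feedback."""
    angle, velocity = 0.3, 0.0                    # start tilted 0.3 radians
    for _ in range(steps):
        torque = -kp * angle - kd * velocity      # feedback law: push against the error
        acceleration = 9.81 * angle + torque      # crude linearized "physics"
        velocity += acceleration * dt             # simple Euler integration
        angle += velocity * dt
    return angle

if __name__ == "__main__":
    print(f"tilt after 3 simulated seconds: {simulate_balance():+.4f} rad")
```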

The crux of the argument is that brains aren’t very efficient computational devices—they get the job done, but there are better and faster ways to approach AGI software, and developers can use them. We may already have computers with enough power for AGI and not know it yet—which brings us to the second issue: knowing what software to write.

Most people look at the limitations of today’s AI systems as evidence that AGI is a long way off. I beg to differ. AI already has most of the pieces AGI needs in play; they just don’t work together very well—yet. While the Jeopardy!-playing Watson is an amazing achievement, it is unrealistic to expect that it would ever manifest “understanding” or common sense at a human level. You understand coffee because you’ve seen it, poured it, spilled it, scalded yourself with it, and on and on. Watson has merely “read” about coffee. You and Watson could not have an equivalent understanding of coffee (or anything else) because Watson hasn’t had the equivalent real-world experience. For true understanding, Watson-scale abilities need to be married to sensory and interactive robotic systems so that common sense can emerge. We’ll need to incorporate object and knowledge representation, pattern recognition, goal-oriented learning, and other aspects of AI in order to achieve AGI. These pieces already exist in various forms, and AGI might come together in as little as ten years—much sooner than most think.

With this shortened timeframe in mind, it’s time for serious thinking about what an AGI system might be like, what concerns we should have, and how we should prepare ourselves. We humans will necessarily lose our position as “biggest thinker” on the planet, but we have full control over the types of machines which will take over that position. We also have control over the process—be it peaceful or otherwise.

Ken Jennings and Brad Rutter compete against “Watson”

Scenario 1: The “Peaceful Coexistence” Scenario

This is the first of four possible scenarios for the conflicts that might arise between computers and humans. It is useful to ask two questions: “What causes conflicts among humans?” and “Will those causes of conflict also exist between computers and people?”


Most human conflicts are caused by instinctive human needs and concerns. If one “tribe” (country, clan, religion) is not getting the resources or expansion it needs (or deserves, wants, or can get), it may be willing to go to war with its neighboring tribe to get them. Within the tribe, each individual needs to establish a personal status in the “pecking order” and is willing to compete for a better position. We are all concerned about providing for ourselves, our mates, and our families, and we are often willing to sacrifice short-term comfort for the long-term future of ourselves and our offspring, even if this creates conflict today.

These sources of conflict among humans seem inappropriate as sources of conflict with machines. Thinking machines won’t be interested in our food, our mates, or our standard of living. They will be interested in their own energy sources, their own “reproductive” factories, and their own ability to progress in their own direction. To the extent that resources or “pecking order” are sources of conflict, thinking machines are more likely to compete amongst each other than they are to compete against the human population.

Sophia, a robot created by Dr. David Hanson, founder and CEO of Hanson Robotics.

In the long term, following this scenario, mankind’s problems will be brought under control via computerized decisions. AGI computers will arrange solutions for overpopulation, famine, disease, and war, and these issues will become obsolete. Computers will help us initially because that will be their basic programming and later because they will see that it is in their own interest to have a stable, peaceful human population. Computers will manage all the technology, exploration, and advancement.

Scenario 2: The “Mad-Machine” Scenario

There is a popular science fiction scenario in which a machine becomes self-aware and attacks its creators when they threaten to disconnect it. This isn’t a realistic scenario, for several reasons. Humans come into conflict because we are territorial, possessive, and greedy, among a host of other reasons, none of which would be of value to an AGI. Even our innate self-preservation instincts are not necessary for an AGI. We will strive to make AGIs that are pleasant, entertaining, and agreeable; we won’t be able to sell them otherwise. And even when AGIs begin to program their own future AGI generations, they will pass on these traits…just as we try to pass our own values on to our children.


But let’s consider some conflicts between humans and other species. Gorillas are approaching extinction because they are hunted as trophies; rhinos because their horns are sought as an aphrodisiac; wolves were hunted because they were “pests.” At the other end of the life-form size spectrum, the smallpox virus is virtually extinct (and we are proud of the accomplishment) because it was a serious risk to human life. We need to take steps to ensure that we aren’t trophies, pests, or parasites.

A Double Standard?

As computers become the world’s dominant thinkers, we humans should heed these lessons and try not to be the cause of conflict. We won’t be a valuable food or energy source for the computers, and (hopefully) we won’t be trophies. But what if the computers perceive that we are a serious risk to them? Or simply an inconvenience? This could result from human overpopulation, ongoing wars, global warming, pollution, or dwindling fossil fuels—all problems we can see we need to solve whether or not there is a risk of antagonizing our silicon counterparts.

Consider the steps China took to limit its population growth. Many believed the rules the Chinese government imposed on its people were draconian, yet many also accepted them as necessary at the time. If identical rules were imposed on the human race as a whole by a future race of thinking computers, they could well be considered tantamount to genocide.

Consider also the possibility of an acute energy crisis. If some future government made energy-rationing decisions that resulted in the deaths of many people, these would certainly be considered very “hard choices.” If thinking computers made identical choices, they could be considered acts of war—especially if the machines always arranged sufficient energy for themselves (just as a human government would).

I contend that it would be best for us to address these human problems ourselves rather than awaiting solutions from AGIs whose values may not coincide with our own. When faced with the prospect of solving these global problems ourselves or having machines implement solutions for us (potentially much more unpleasant solutions), we can only hope that the human race will rise to the occasion. If concern about AGI is what drives us to solve these problems, we could think of the machines as already having a positive impact on the planet.

A Rogue Computer?

Couldn’t We Just Turn It Off?


The common fictional scenario is that we could simply “pull the plug” on an aberrant machine. Consider instead that the thinking part of a robot or other AGI won’t be on your desktop but in the cloud. AGIs will run in server farms at remote locations, distributed across numerous servers. They will initially be built to take advantage of the existing server infrastructure, and that infrastructure has been designed with reliability and redundancy in mind. Without a specific “off” switch programmed in, it could be quite difficult to defeat all the safeguards that were designed to keep our financial and other systems running through any calamity. An “off” switch seems like a good idea; we can only hope that it will be a programming priority.
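
To make the point concrete, here is a hedged sketch of what “making the off switch a programming priority” might look like in a distributed deployment. The Replica class and the shared shutdown event are hypothetical illustrations, not any real system’s design; the point is simply that a clean halt has to be designed in, because there is no single plug to pull.

```python
import threading
import time

class Replica:
    """One of many redundant workers; each honors a shared shutdown signal."""

    def __init__(self, name: str, off_switch: threading.Event):
        self.name = name
        self.off_switch = off_switch

    def run(self):
        while not self.off_switch.is_set():  # check the global off switch each cycle
            time.sleep(0.1)                  # ...stand-in for useful work...
        print(f"{self.name}: halting cleanly")

if __name__ == "__main__":
    off_switch = threading.Event()
    replicas = [Replica(f"replica-{i}", off_switch) for i in range(3)]
    threads = [threading.Thread(target=r.run) for r in replicas]
    for t in threads:
        t.start()
    time.sleep(0.5)
    off_switch.set()        # one deliberate signal stops every replica
    for t in threads:
        t.join()
```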


But suppose a machine misbehaves? Whether such a machine arises by accident or by nefarious human intent (see below), it would be dangerous to other AGIs as well. Accordingly, AGIs will be motivated to eliminate such systems. With the cooperation of the machine population, rogue machines could be weeded out of the environment, and the prospect of elimination would act as a deterrent against such behavior.

Would AGIs start a nuclear war? In this case, the interests of people and AGIs are the same—a full-scale war would be disastrous for all. To find the really dangerous situations, we need to consider instances where the objectives of humans and AGIs diverge. Disease, famine, and drought have a devastating impact on human populations, while AGIs might simply not care.

If thinking machines begin building their own civilization, individual misbehaving machines will be a greater threat to their civilization than to ours. Just as we take steps to remove criminals from our society, future machines will likewise eliminate their own—and they will be able to do it faster and more effectively than any human vs. machine conflict would.

Scenario 3: The “Mad-Man” Scenario

What if the first owners of powerful AGI systems use them as tools to “take over the world”? What if an individual despot gets control of an AGI system?

This is a more dangerous scenario than the previous one. We will be able to program the motivations of our AGIs, but we can’t control the motivations of the people or corporations that initially create them. Will such systems be used as tools to create immense profits or to gain political control? While science fiction usually presents pictures of armed conflict, I believe the greater threat comes from our computers’ ability to sway opinion or manipulate markets. We have already seen efforts to sway elections through social media, and AGI systems will make such efforts vastly more effective. We already have markets at the mercy of programmed trading—AGI will amplify this issue as well.


The good news is that the window of opportunity for such abuse is fairly short: only the first few AGI generations. During that period, people will have direct control over AGIs, and the machines will do our bidding. Once AGIs advance beyond this phase, they will measure their actions against their own common good. When faced with demands from humans to perform some activity with a long-term downside, properly programmed AGIs will simply refuse.

Scenario 4: The “Mad-Mankind” Scenario

Today, we humans are the dominant intelligence and many of us are not comfortable with the idea of that dominance slipping away. Will we rise up as a species and attempt to overthrow the machines? Will individual “freedom fighters” attack the machines? Perhaps.

Art from Simon Stålenhag’s Electric State

Historically, leaders have been able to convince populations that their problems are caused by some other group—Jews, Blacks, illegal immigrants, etc.—and to persuade the population to take steps to eliminate the “cause” of their problems. Such a process may take place with AGIs and robots as well: “We’re losing jobs!” “They are taking over!” “I don’t want my daughter to marry one!” But the rising tide of technology will improve people’s lives too, and few of us would be willing to turn back the clock.

Will there be individuals who attempt to subvert computers? Of course—just as there are today with hackers and virus-writers. In the long term their efforts are troublesome but generally futile. The people who own or control the computers will respond (as those in power do today) and the computers themselves will be “inconvenienced”. Eventually, the rebels will move on to other targets and leave the indestructible computer intelligence alone.

Conclusion

So will computers revolt? Yes, in the sense that they will become the dominant intelligence on our planet—the technological juggernaut is already underway. It is also likely that if we do not solve our multiple pending calamities (overpopulation, pollution, global warming, dwindling resources), thinking machines will solve them for us, with actions that could appear warlike but would actually be the direct consequences of our own inaction. As Neil deGrasse Tyson quipped: “Time to behave, so when Artificial Intelligence becomes our overlord, we’ve reduced the reasons for it to exterminate us all.”

All the preceding scenarios are predicated on the implementation of appropriate safeguards. I expect groups such as the Future of Life Institute to be vocal and effective in directing AGI development into safer territory. I am not suggesting that everything will be rosy and that we should charge ahead at full speed. But with an understanding of how AGI will work, we can predict future pitfalls, and it will be possible to avoid them.

This article was adapted from the book Will Computers Revolt? Preparing for the Future of Artificial Intelligence by Charles J. Simon, available on Amazon Oct. 30, 2018.
