In news to file under “What could possibly go wrong,” two U.S. deterrence experts have penned an article suggesting that it might be time to hand control of the launch button for America’s nuclear weapons over to artificial intelligence. You know, that thing which can mistake a 3D-printed turtle for a rifle!
In an article titled “America Needs a ‘Dead Hand,’” Dr. Adam Lowther and Curtis McGiffin suggest that “an automated strategic response system based on artificial intelligence” may be called for due to the speed with which a nuclear attack could be leveled against the United States. Specifically, they are worried about two weapons — hypersonic glide vehicles and hypersonic cruise missiles — which reduce response times to mere minutes from when an attack is launched until it strikes.
They acknowledge that such a suggestion is likely to “generate comparisons to Dr. Strangelove’s doomsday machine, War Games’ War Operation Plan Response, and The Terminator’s Skynet.” But they also argue that “the prophetic imagery of these science fiction films is quickly becoming reality.” Given the compressed response time frame imposed by modern weapons of war, the two experts think that an A.I. system “with predetermined response decisions, that detects, decides, and directs strategic forces” could be the way to go.
As with any nuclear deterrent, the idea is not to use such a system. Nuclear deterrents are based on the concept that adversaries know the U.S. will detect a nuclear launch and answer with a devastating response. That threat should be enough to put them off. Using A.I. tools for this decision-making process would just update this idea for 2019.
But is it really something we should consider? The researchers stop short of suggesting such a thing should definitely be embraced. “Artificial intelligence is no panacea,” they write. “Its failures are numerous. And the fact that there is profound concern by well-respected experts in the field that science fiction may become reality, because artificial intelligence designers cannot control their creation, should not be dismissed.”
Still, the fact that such a system is now feasible, both technologically and strategically, means it’s no longer quite so science fiction as many would like. And just when we thought the worst thing that could come out of Terminator was the increasingly terrible sequels…