
Can AI smart enough to win at poker be weaponized without going full Terminator?

Last month, some of the world’s best Texas Hold’em poker players gathered at the Rivers Casino in Pittsburgh to take on an unusual opponent. Over the course of 20 days and 120,000 hands, they were utterly outmatched by an artificial intelligence known as Libratus.

This isn’t the first time an AI has beaten humans in a test of wits, and it won’t be the last. Last year, Google DeepMind’s AlphaGo beat champion Go player Lee Sedol in a high-profile series, and there are plans to teach AIs how to play StarCraft II.

However, these AIs aren’t being developed just to beat human players at games. The same groundwork that helps a computer excel at poker can be applied to all kinds of different scenarios. Right now, we’re seeing the capabilities of AIs that can think three moves ahead of their opponent — and soon, systems like these could be arbitrating matters of life and death.
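The “three moves ahead” framing boils down to plain game-tree lookahead. As a rough illustration only, and emphatically not how Libratus or AlphaGo actually work, here is a minimal minimax sketch in Python; the moves, apply_move, and evaluate helpers are hypothetical stand-ins for a real game’s rules:

```python
# Minimal minimax lookahead: explore the game tree a fixed number of moves
# deep and back up the best achievable score. Purely illustrative; systems
# like Libratus and AlphaGo rely on far more sophisticated methods.

def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    """Score `state` by looking `depth` moves ahead.

    `moves`, `apply_move`, and `evaluate` are hypothetical helpers supplied
    by the game: legal moves, the successor state, and a heuristic score
    from the maximizing player's point of view.
    """
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)
    results = (
        minimax(apply_move(state, m), depth - 1, not maximizing,
                moves, apply_move, evaluate)
        for m in legal
    )
    return max(results) if maximizing else min(results)
```

Called with depth=3, this is literally “thinking three moves ahead.” The catch is that it assumes the full game state is visible to both players, which is precisely the assumption the next section dismantles.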

Imperfect Information

Shortly after Libratus saw off its competition at the Rivers Casino, its creator, Carnegie Mellon professor Tuomas Sandholm, was interviewed about the project by Time. When asked about potential applications for the AI, he reeled off a list of “high stakes” possibilities including business negotiations, cybersecurity, and military strategy planning.


Libratus hit the headlines because of its ability to play poker, but it’s capable of much more than that. Sandholm didn’t spend twelve years of his life working on the project to spot his friends’ bluffs when game night rolls around.

The real strength of Libratus is its capacity to reason through scenarios where information is imperfect or incomplete. This is what sets it apart from the DeepMind system that beat Lee Sedol at Go last year. Go is a game of perfect information: the entire board is visible to both players. Poker, by contrast, revolves around hidden information. Libratus couldn’t know what cards the other players held, and had to play around that restriction.

Sandholm described heads-up, no-limit Texas Hold’em as the “last frontier” among games that have been subjected to significant AI research. The fact that Libratus was so successful against high-level human players represents a benchmark for the problem-solving capacity of AIs working with imperfect information.
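This article doesn’t detail Libratus’s internals, but in the research literature the standard starting point for imperfect-information games is counterfactual regret minimization (CFR). The sketch below, offered as an illustration rather than Libratus’s actual code, runs vanilla CFR on Kuhn poker, a three-card toy game. Strategies are learned per information set (your own card plus the betting history so far), because neither player can see the other’s card:

```python
import random

# Vanilla counterfactual regret minimization (CFR) on Kuhn poker: each player
# antes 1, receives one card from {1, 2, 3}, and may pass (p) or bet (b) one
# chip. This is a textbook demo, not Libratus's actual algorithm.
PASS, BET = 0, 1
node_map = {}  # information set -> Node


class Node:
    """Accumulated regrets and average strategy for one information set."""

    def __init__(self):
        self.regret_sum = [0.0, 0.0]
        self.strategy_sum = [0.0, 0.0]

    def get_strategy(self, weight):
        # Regret matching: act in proportion to positive accumulated regret.
        strategy = [max(r, 0.0) for r in self.regret_sum]
        total = sum(strategy)
        strategy = [s / total for s in strategy] if total > 0 else [0.5, 0.5]
        for a in (PASS, BET):
            self.strategy_sum[a] += weight * strategy[a]
        return strategy

    def get_average_strategy(self):
        total = sum(self.strategy_sum)
        return [s / total for s in self.strategy_sum] if total > 0 else [0.5, 0.5]


def cfr(cards, history, p0, p1):
    """Expected utility of `history` for the player about to act."""
    player = len(history) % 2
    # Terminal histories: pp (cheap showdown), bp / pbp (fold), bb / pbb (big showdown).
    if len(history) > 1:
        higher = cards[player] > cards[1 - player]
        if history[-1] == "p":
            if history == "pp":
                return 1 if higher else -1
            return 1  # the opponent folded to a bet
        if history[-2:] == "bb":
            return 2 if higher else -2

    info_set = str(cards[player]) + history
    node = node_map.setdefault(info_set, Node())
    strategy = node.get_strategy(p0 if player == 0 else p1)

    util = [0.0, 0.0]
    node_util = 0.0
    for a in (PASS, BET):
        next_history = history + ("p" if a == PASS else "b")
        if player == 0:
            util[a] = -cfr(cards, next_history, p0 * strategy[a], p1)
        else:
            util[a] = -cfr(cards, next_history, p0, p1 * strategy[a])
        node_util += strategy[a] * util[a]

    # Regrets are weighted by the opponent's probability of reaching this node.
    opponent_reach = p1 if player == 0 else p0
    for a in (PASS, BET):
        node.regret_sum[a] += opponent_reach * (util[a] - node_util)
    return node_util


cards = [1, 2, 3]
for _ in range(100_000):
    random.shuffle(cards)
    cfr(cards, "", 1.0, 1.0)

for info_set in sorted(node_map):
    probs = node_map[info_set].get_average_strategy()
    print(info_set, [round(p, 3) for p in probs])
```

Run long enough, the printed strategies converge toward Kuhn poker’s known equilibrium, which includes genuinely bluffing with the lowest card some of the time. The leap from this toy game to heads-up, no-limit Hold’em, with its astronomically larger space of hidden states and bet sizes, is what made Libratus’s win such a benchmark.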


It’s no secret that AIs are getting smarter; exhibitions like last month’s high-stakes poker game are intended to publicize the most recent advances. AI has long been a touchstone for cutting-edge technology, and there’s now plenty of easily digestible evidence of just how advanced the field has become. The financial and medical industries are already discussing how to make these advances work for them, and they’re not alone.

The United States military is already deep in the process of establishing the best way to implement this kind of technology on the battlefield. It’s not a case of ‘if’; it’s a case of ‘how’.

Lieutenant Libratus

As it stands, the U.S. military is embroiled in a fierce debate over how best to use AI to wage war. Opinion is split between using the technology to assist human operatives and allowing the creation of autonomous, AI-controlled forces.

Libratus hit the headlines because it can play poker, but it’s capable of much more than that.

It’s easy to see why some are eager to pursue AI-controlled forces. On the surface, it’s a straightforward way of reducing human casualties in combat operations. However, this type of technology is a Pandora’s box: once it’s available to some, it will quickly be adopted by all.

Whether or not you trust any country’s government to use AI-controlled forces ethically, it seems obvious that letting these weapons of war out into the open would result in heinous acts of a magnitude we can’t even comprehend.

However, there’s also an argument to be made that someone, somewhere will implement this technology eventually. Ignoring advances for ethical reasons is perhaps naïve, if the results are going to end up in enemy hands regardless.

This dispute has come to be known as the Terminator conundrum, a turn of phrase that’s been used on several occasions by General Paul J. Selva, Vice Chairman of the Joint Chiefs of Staff.

“I don’t think it’s impossible that somebody will try to build a completely autonomous system,” said General Selva at a Military Strategy Forum held at the Center for Strategic and International Studies in August 2016. “And I’m not talking about something like a cruise missile or a smart torpedo or a mine, that requires a human to target it and release it, and it goes and finds its target. I’m talking about a wholly robotic system that decides whether or not — at the point of decision — it’s going to do lethal harm.”

Selva argued that it’s important that a set of conventions is established to govern this emerging form of warfare. He acknowledges that these rules will need to be iterated upon, and that there will always be entities that disregard any regulation — but without a baseline for fair usage, all bets are off.


Many experts would agree that AI hasn’t yet reached the stage of sophistication required for ethical use in military operations. However, it won’t be long before simple AI can be used in warfare, even if the implementation is clumsy.

Without rules in place, there’s no way to differentiate between ethical usage and clumsy usage. Establishing guidelines might require a dip into Pandora’s box, but you could argue that the alternative amounts to leaving the box wide open.

Advanced warfare requires advanced ethics

After Libratus dominated its opposition in Texas Hold’em, Sandholm told Time that before the contest, he thought that the AI had a “50-50 chance” to win. It doesn’t take one of the world’s best poker players to recognize those aren’t great odds.

Sandholm is likely playing up his self-doubt for the sake of the interview, but it certainly seems that he wasn’t completely confident that Libratus had victory within its grasp. That’s fine when the stakes are limited to his reputation, and the reputation of the university he represents. However, when talking about using AI on the battlefield, a 50-50 chance that everything goes to plan isn’t anywhere near good enough.

Libratus is an amazing accomplishment in the field of AI, but it’s also a reminder of how much work there is still to be done. The “imperfect information” that can impact the way a game of Texas Hold’em plays out is limited to the 52 cards in a standard deck; in combat operations, there are countless other known and unknown variables that come into play.

Once military implementation of AI becomes commonplace, it will be too late to start regulating its usage. It’s fortunate that there’s still work to be done before today’s leading AI is competent enough to answer to a commanding officer, because there’s plenty of legislative groundwork to be laid before that kind of practice can be considered ethically acceptable.


During the Military Strategy Forum mentioned earlier, General Selva noted that experts put the creation of a wholly autonomous machine soldier around a decade away. It’s perhaps relevant that when AlphaGo beat Lee Sedol last year, the accomplishment came a decade earlier than expected, according to a report from MIT Technology Review.

Research into AI is progressing at a rate that’s surprising even to experts working in the field, and that’s great news. However, there’s a marked difference between useful progress and technology that’s ready to do the job when lives are on the line.

Military implementation of AI will become a reality, and it’ll probably happen sooner than we expect. Now is the time to put guidelines in place, so we don’t run the risk of seeing these technologies abused once they’re advanced enough to be put in the line of fire.

Brad Jones
Former Digital Trends Contributor