
Can AI smart enough to play poker be weaponized without turning Terminator?

Last month, some of the world’s best Texas Hold’em poker players gathered at the Rivers Casino in Pittsburgh to take on an unusual opponent. Over the course of 20 days and 120,000 hands, they were utterly outmatched by an artificial intelligence known as Libratus.

This isn’t the first time an AI has beaten humans in a test of wits, and it won’t be the last. Last year, Google DeepMind’s AlphaGo beat champion Go player Lee Sedol in a high-profile series, and there are plans to teach AIs how to play StarCraft II.

However, these AIs aren’t being developed just to beat human players at games. The same groundwork that helps a computer excel at poker can be applied to all kinds of different scenarios. Right now, we’re seeing the capabilities of AIs that can think three moves ahead of their opponent — and soon, systems like these could be arbitrating matters of life and death.

Imperfect Information

Shortly after Libratus saw off its competition at the Rivers Casino, its creator, Carnegie Mellon professor Tuomas Sandholm, was interviewed about the project by Time. When asked about potential applications for the AI, he reeled off a list of “high stakes” possibilities including business negotiations, cybersecurity, and military strategy planning.


Libratus hit the headlines because of its ability to play poker, but it’s capable of much more than that. Sandholm didn’t spend twelve years of his life working on the project to spot his friends’ bluffs when game night rolls around.

The real strength of Libratus is its capacity to figure out scenarios where information is either imperfect or incomplete. This is what sets the AI apart from the DeepMind system that beat Lee Sedol in Go last year. In Go, all information about the game state is visible to both players; poker, by contrast, revolves around incomplete information. Libratus couldn’t know what cards the other players held, and had to play around that restriction.

Sandholm described heads-up, no-limit Texas Hold’em as the “last frontier” among games that have been subjected to significant AI research. The fact that Libratus was so successful against high-level human players represents a benchmark for the problem-solving capacity of AIs working with imperfect information.
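Libratus’s exact methods are the product of years of research, but algorithms for imperfect-information games are commonly built on counterfactual regret minimization: a program plays huge numbers of hands and shifts probability toward the actions it most regrets not having taken. The Python sketch below illustrates the core regret-matching idea on rock-paper-scissors rather than poker, against a hypothetical opponent who over-plays rock; it’s a loose illustration of the principle, not Libratus’s actual code.

```python
import random

# Toy regret-matching sketch: the building block of counterfactual regret
# minimization (CFR), the family of algorithms poker AIs like Libratus
# build on. An illustration of the principle, not Libratus's code.
ACTIONS = ["rock", "paper", "scissors"]
BEATS = {("rock", "scissors"), ("paper", "rock"), ("scissors", "paper")}

def payoff(mine, theirs):
    """+1 for a win, -1 for a loss, 0 for a tie."""
    if mine == theirs:
        return 0.0
    return 1.0 if (mine, theirs) in BEATS else -1.0

def strategy_from_regrets(regrets):
    """Mix actions in proportion to positive accumulated regret."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    if total == 0:
        return [1.0 / len(ACTIONS)] * len(ACTIONS)  # no regrets yet: play uniformly
    return [p / total for p in positive]

regrets = [0.0, 0.0, 0.0]
strategy_sum = [0.0, 0.0, 0.0]  # the *average* strategy is what converges

for _ in range(20_000):
    strategy = strategy_from_regrets(regrets)
    mine = random.choices(range(3), weights=strategy)[0]
    # Hypothetical biased opponent: rock 50%, paper 25%, scissors 25%.
    theirs = random.choices(range(3), weights=[0.5, 0.25, 0.25])[0]
    actual = payoff(ACTIONS[mine], ACTIONS[theirs])
    for a in range(3):
        # Regret: how much better action a would have done than what was played.
        regrets[a] += payoff(ACTIONS[a], ACTIONS[theirs]) - actual
    strategy_sum = [s + p for s, p in zip(strategy_sum, strategy)]

total = sum(strategy_sum)
print({a: round(s / total, 3) for a, s in zip(ACTIONS, strategy_sum)})
# Paper ends up dominating the average strategy, since it beats the
# opponent's over-played rock.
```

Scaling that regret loop from three visible actions to the vast decision tree of no-limit hold’em, where hidden cards force the program to reason over every hand an opponent might hold, is what makes Libratus’s result such a benchmark.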


It’s no secret that AIs are getting smarter, and exhibitions like last month’s high-stakes poker game are intended to publicize the most recent advances. AI has long been a touchstone for cutting-edge technology, and there’s now plenty of easily digestible evidence of how advanced work in this field has become. The financial and medical industries are already exploring how they can make these advances work for them, and they’re not alone.

The United States military is already deep in the process of establishing the best way to implement this kind of technology on the battlefield. It’s not a case of ‘if’; it’s a case of ‘how’.

Lieutenant Libratus

As it stands, the U.S. military is embroiled in a fierce debate over how best to use AI to wage war. Opinion is split between using the technology to aid and assist human operatives, and allowing the creation of autonomous, AI-controlled forces.


It’s easy to see why some are eager to pursue AI-controlled forces. On the surface, it’s a straightforward way of diminishing human casualties in combat operations. However, this type of technology is a Pandora’s box: once it’s available to some, it will quickly be adopted by all.

Whether or not you trust any particular government to utilize AI-controlled forces ethically, it seems plain that letting these weapons of war out into the open would result in heinous acts of a magnitude we can’t yet comprehend.

However, there’s also an argument to be made that someone, somewhere will implement this technology eventually. Ignoring advances for ethical reasons is perhaps naïve, if the results are going to end up in enemy hands regardless.

This dispute has come to be known as the Terminator conundrum, a turn of phrase that’s been used on several occasions by General Paul J. Selva, Vice Chairman of the Joint Chiefs of Staff.

“I don’t think it’s impossible that somebody will try to build a completely autonomous system,” said General Selva at a Military Strategy Forum held at the Center for Strategic and International Studies in August 2016. “And I’m not talking about something like a cruise missile or a smart torpedo or a mine, that requires a human to target it and release it, and it goes and finds its target. I’m talking about a wholly robotic system that decides whether or not — at the point of decision — it’s going to do lethal harm.”

Selva argued that it’s important to establish a set of conventions governing this emerging form of warfare. He acknowledged that these rules will need to be iterated upon, and that there will always be entities that disregard any regulation, but without a baseline for fair usage, all bets are off.


Many experts would agree that AI hasn’t yet reached the stage of sophistication required for ethical use in military operations. However, it won’t be long before simple AI can be used in warfare, even if the implementation is clumsy.

Without rules in place, there’s no way to differentiate between ethical usage and clumsy usage. Establishing guidelines might require a dip into Pandora’s box, but you could argue that the alternative amounts to leaving the box wide open.

Advanced warfare requires advanced ethics

After Libratus dominated its opposition in Texas Hold’em, Sandholm told Time that before the contest, he thought that the AI had a “50-50 chance” to win. It doesn’t take one of the world’s best poker players to recognize those aren’t great odds.

Sandholm is likely playing up his self-doubt for the sake of the interview, but it certainly seems that he wasn’t completely confident that Libratus had victory within its grasp. That’s fine when the stakes are limited to his reputation, and the reputation of the university he represents. However, when talking about using AI on the battlefield, a 50-50 chance that everything goes to plan isn’t anywhere near good enough.

Libratus is an amazing accomplishment in the field of AI, but it’s also a reminder of how much work there is still to be done. The “imperfect information” that can impact the way a game of Texas Hold’em plays out is limited to the 52 cards in a standard deck; in combat operations, there are countless other known and unknown variables that come into play.

Once military implementation of AI becomes commonplace, it will be too late to start regulating its usage. It’s fortunate that there’s still work to be done before today’s leading AI is competent enough to answer to a commanding officer, because there’s plenty of legislative groundwork to be laid before that kind of practice can be considered ethically acceptable.


During the Military Strategy Forum mentioned earlier, General Selva noted that experts thought the creation of a wholly autonomous machine soldier was around a decade away. It’s perhaps relevant that when DeepMind’s AlphaGo beat Lee Sedol last year, the accomplishment came a decade earlier than expected, according to a report from MIT Technology Review.

Research into AI is progressing at a rate that surprises even experts working in the field, and that’s great news. However, there’s a marked difference between useful progress and technology that’s ready to do the job when lives are on the line.

Military implementation of AI will become a reality, and it’ll probably happen sooner than we expect. Now is the time to put guidelines in place, so we don’t run the risk of seeing these technologies abused once they’re advanced enough to be put in the line of fire.

Brad Jones
Former Digital Trends Contributor
Brad is an English-born writer currently splitting his time between Edinburgh and Pennsylvania. You can find him on Twitter…