Good at StarCraft? DARPA wants to train military robots with your brain waves

Douglas Levere, University at Buffalo

The 1984 movie The Last Starfighter tells the story of a teenager whose calling in life seems to be nothing more than playing arcade games. Fortunately, he’s spectacularly good at it. The game he’s best at is a video game called, as the movie’s title would have it, Starfighter. In it, the player must defend the Frontier from the perils of Xur and the Ko-Dan Armada by way of a series of wireframe laser battles.

But there’s a twist. It turns out that Starfighter isn’t simply a game; it’s actually a kind of test. The war with Xur and the Ko-Dan Armada is real, and the arcade game — with its demands on rapid-fire reaction times on the part of players — is a stealth recruiting tool, intended to seek out the best of the best to become genuine starfighters.

More than 35 years after The Last Starfighter hit theaters, engineers from the University at Buffalo’s Artificial Intelligence Institute in New York have received funding from the U.S. Defense Advanced Research Projects Agency (DARPA) to carry out research that’s… well, let’s just say that it’s extremely similar. They have built a currently unnamed real-time strategy game, reminiscent in style of existing titles like StarCraft or Stellaris. In this game, players must use resources to build units and defeat enemies, manipulating large numbers of agents on-screen to complete their mission objectives.

But this isn’t any ordinary gaming experience. When people play the University at Buffalo’s new strategy game, they first have to agree to be hooked up to electroencephalogram (EEG) technology so that the game’s designers can record their brain activity. As they play, their eye movements are also tracked by special ultra-high-speed cameras to see exactly how they respond to what’s happening on-screen. This information, teased out using machine learning algorithms, will then be used to develop new algorithms that can help train large numbers of future robots. In particular, the hope is that these insights into complex decision-making can improve coordination between large teams of autonomous air and ground robots. You know, should the game be brought to life.

Patrik Stollarz/Stringer/Getty Images

For anyone who grew up on movies like The Last Starfighter, this will seem strangely familiar. Although there’s a twist here, too. In The Last Starfighter (and other sci-fi stories that tread similar ground, such as Orson Scott Card’s Ender’s Game and Ernest Cline’s Armada), the goal is to train humans to have the kind of lightning-fast reflexes that would normally be found in a machine. In this case, it’s different. The purpose of the University at Buffalo’s new gaming project isn’t to make players more machine-like.

Just the opposite, in fact. It’s all about trying to make machines that think more like humans.

Training tomorrow’s swarms today

“We’re trying to recruit [participants] who have strong gaming experience,” Souma Chowdhury, assistant professor of mechanical and aerospace engineering in the School of Engineering and Applied Sciences, told Digital Trends.

Chowdhury is one of the lead investigators on the project. He paused and gave a nervous chuckle, the slightest hint of an apology creeping into his voice. “I myself do not have gaming experience,” he said. “I’m not a computer gamer at all. But many of our students are into games like crazy.”

“We’re trying to recruit [participants] who have strong gaming experience.”

Chowdhury’s own area of interest is swarm intelligence, a branch of computer science dating back to the late 1980s. Swarm intelligence is all about the collective behavior of decentralized, self-organized systems, both virtual and robotic. “It’s a real hot topic,” he said. “It’s becoming known that there are a lot of different applications which could be done by not using a single $1 million robot, but rather a large swarm of simpler, cheaper robots. These could be ground-based, air-based, or a combination of those two approaches.”

Some researchers in swarm robotics try to create swarms that can carry out complex procedures by hand-crafting the actions of every agent involved, the way you might coach each member of a dance troupe so they can master a complex routine. Put them all together and you’ll get something that looks like emergent collaboration, although it’s actually a collection of individuals doing their own thing. The promise of modern machine learning is that it could give robot swarms the ability to function more autonomously as a meaningful collective.
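
That hand-crafted, per-agent approach can be illustrated with a toy simulation. In this minimal sketch (hypothetical, not the Buffalo team’s code), each agent follows a single local rule: drift toward the average position of its visible neighbors. No agent knows the group’s goal, yet the swarm contracts into a cluster with no central controller:

```python
import math

def step(positions, radius=5.0, speed=0.5):
    """Advance every agent one tick using a purely local cohesion rule."""
    new_positions = []
    for i, (x, y) in enumerate(positions):
        # Each agent only sees neighbors within `radius` of itself.
        nbrs = [(px, py) for j, (px, py) in enumerate(positions)
                if j != i and math.hypot(px - x, py - y) < radius]
        if nbrs:
            cx = sum(p[0] for p in nbrs) / len(nbrs)
            cy = sum(p[1] for p in nbrs) / len(nbrs)
            d = math.hypot(cx - x, cy - y) or 1.0
            move = min(speed, d)  # step toward the neighbors' centroid
            x += move * (cx - x) / d
            y += move * (cy - y) / d
        new_positions.append((x, y))
    return new_positions

def spread(positions):
    """Rough measure of how dispersed the swarm is."""
    xs = [p[0] for p in positions]
    ys = [p[1] for p in positions]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

agents = [(0.0, 0.0), (2.0, 1.0), (1.0, 3.0), (3.0, 2.0)]
for _ in range(20):
    agents = step(agents)
# The swarm converges toward a common point: emergent-looking cohesion
# from individual agents each doing their own thing.
```

The collective behavior falls out of the local rule; the catch, as the article notes, is that someone had to hand-craft that rule in the first place.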

But that’s easier said than done. Training even a single robot to do something requires a significant amount of time and data. Training a swarm, potentially with varying abilities, to complete tasks in complex, uncertain environments is a whole lot trickier. It means running tens of thousands of simulations, making the process extremely time-consuming and expensive. The idea driving this new project is that watching humans play the game will make it easier for machines to learn.

“Imagine walking into a classroom where there’s no teacher, and saying ‘let’s learn algebra,’” Chowdhury said. “You can learn just using exercises and textbooks. But it’s going to take a lot more time. If you have a teacher you can follow it’ll make it faster. In this case, we want to see how humans play this game and then use that to significantly speed up the A.I. in learning the behavior. Before it would be necessary to run 10,000 simulations to learn. Now we only need to run perhaps 1,000 simulations and augment this with data from humans.”
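
Chowdhury’s teacher analogy maps onto a standard machine learning idea: learning from demonstrations. Here is a hypothetical sketch of its simplest form, behavioral cloning, in which a starting policy is built directly from logged human (state, action) pairs before the A.I. refines it with its own simulations. The state and action names are invented for illustration:

```python
from collections import Counter, defaultdict

def clone_policy(demonstrations):
    """Build a policy that imitates the most frequent human action per state."""
    counts = defaultdict(Counter)
    for state, action in demonstrations:
        counts[state][action] += 1
    # For each state, pick the action humans chose most often.
    return {s: c.most_common(1)[0][0] for s, c in counts.items()}

# Logged (state, action) pairs from human play sessions (invented data).
demos = [
    ("enemy_visible", "attack"), ("enemy_visible", "attack"),
    ("enemy_visible", "retreat"),
    ("low_resources", "gather"), ("low_resources", "gather"),
]
policy = clone_policy(demos)
# The A.I. now starts from human-like behavior instead of from scratch,
# and can spend its remaining simulations improving on it.
```

The cloned policy is only a warm start, but starting from sensible behavior is precisely what lets the team cut 10,000 blind simulations down toward 1,000.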

The researchers believe that, by observing the type of tactical or strategic decisions humans take when they play a strategy game, it will be possible to work out which features and events motivate these actions.
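
One hypothetical way to work out which events motivate which actions is to pair each logged human action with the game event that immediately preceded it and count the co-occurrences. The event and action names here are invented for illustration:

```python
from collections import Counter

def action_triggers(log):
    """log: time-ordered list of ("event", name) or ("action", name) entries.

    Returns counts of (preceding_event, action) pairs, a crude signal of
    which in-game events appear to motivate which player decisions.
    """
    triggers = Counter()
    last_event = None
    for kind, name in log:
        if kind == "event":
            last_event = name
        elif kind == "action" and last_event is not None:
            triggers[(last_event, name)] += 1
    return triggers

# Invented gameplay log: events interleaved with player actions.
log = [
    ("event", "ambush"), ("action", "retreat"),
    ("event", "ambush"), ("action", "retreat"),
    ("event", "resource_found"), ("action", "expand"),
]
trig = action_triggers(log)
# A high count for ("ambush", "retreat") suggests ambushes motivate retreats.
```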

Teaching the machines

“The project is ongoing, at a pretty aggressive pace,” Chowdhury said. “We are around the halfway mark.”

At present, they’ve yet to start the data-gathering phase of the project, although Chowdhury has a good idea of the format it will take. The plan is to carry out experiments with around 25 participants. Each participant will play six or seven games with different randomized settings and levels of complexity. Unlike games such as StarCraft, which can last for hours, each game here will last only between five and ten minutes. That will be sufficient to measure decision-making strategies, and for these features of interest to be extracted using algorithms and scripts developed by the team.

“Humans can come up with very unique strategies that an A.I. might not ever learn.”

“At this point, it is difficult to comment on the amount or size of data that will be eventually collected,” Chowdhury said. However, the aim is reportedly to eventually scale up to 250 aerial and ground robots, working in highly complex situations. One example might be dealing with sudden loss of visibility due to smoke. The team plans to develop algorithms, modeled on human behavior, that will allow them to adapt to challenges such as this.

“Humans can come up with very unique strategies that an A.I. might not ever learn,” he continued. “A lot of the hype we see in A.I. is in applications in relatively deterministic environments. But in terms of contextual reasoning in a real environment to get stuff done? That’s still at a nascent stage.”

Humans make the strategies

In Daniel Kahneman’s 2011 book Thinking, Fast and Slow, the Nobel-winning economist and psychologist describes two different modes of thought. The first system is fast and instinctive, the kind of thing we might call intuition. That might be locating the source of a specific sound, completing the phrase “war and…” or, yes, blasting Ko-Dan ships out of the air (or lack thereof) in Starfighter. The second system is slower, more deliberate, more logical. It’s centered on conscious thinking — which in this case might very well refer to forming strategies.

Chowdhury doesn’t cite Kahneman’s work when he discusses the project. But it’s hard not to be reminded of it. As he points out, machines already boast an impressive number of autonomous features. A $10,000 drone possesses some impressive smarts when it comes to navigating between locations. The same is true of agents in a strategy game. Units are often governed by low-level rules that allow them to react to their surroundings. That could mean attacking or defending when confronted by an enemy. It might also mean maintaining formation as they move around the map. But in both cases, what’s missing is the overarching strategy needed to execute tasks.
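
Those low-level rules can be sketched as a simple per-unit decision function. Everything here, names and thresholds alike, is invented for illustration; the point is that nothing in it encodes an overarching strategy:

```python
def unit_action(unit_hp, enemy_in_range, distance_to_formation, max_drift=2.0):
    """Reactive per-unit decision: purely local rules, no global plan."""
    if enemy_in_range:
        # Confronted by an enemy: fight if healthy, hold defensively if not.
        return "attack" if unit_hp > 30 else "defend"
    if distance_to_formation > max_drift:
        # Drifted too far from the group: rejoin the formation.
        return "regroup"
    return "hold"
```

A unit running this logic reacts sensibly to its immediate surroundings, but deciding where the formation should go, and why, is the supervisor-level strategy the project wants to learn from human players.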

“You don’t need a human to do low-level control, controlling each agent,” Chowdhury said. “That’s not what we’re interested in. They’re not controlling every single robot and where they’re going. The human role is more that of a supervisor or a tactician. A good analogy would be that, in a disaster response environment, you have a supervisor. They might have a team of 100 rescuers working under them. There’s a hierarchy, but the supervisor does not tell each of those team members exactly what they should do. The rescuers make a lot of independent decisions, but the supervisor creates the overall tactics. That’s what we want to build.”

If Chowdhury and his team get their way, the robot swarms of tomorrow will be a whole lot smarter. And they’ll have gamers to thank for it.

Luke Dormehl