
AI learns how to tackle new situations by studying how humans play games

If artificial intelligence is going to excel at driving cars or performing other complex tasks that we humans take for granted, then it needs to learn how to respond to unknown circumstances. That is the task of machine learning, which needs real-world examples to study.

So far, however, most of the data used to train machine-learning systems comes from virtual environments. A group of researchers, including a Microsoft Research scientist from the U.K., has set out to change that by using game replay data that shows an AI how humans tackle complex problems.

The researchers used Atari 2600 game replays to provide real-world data to a deep learning system that uses trial and error, or reinforcement learning (RL), to tackle new tasks in a previously unknown environment. The data used in the study represents what the researchers called the “largest and most diverse such data set” that has ever been publicly released.
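To make "trial and error" concrete, here is a minimal tabular Q-learning sketch of the reinforcement learning loop the article refers to. The environment interface (reset, step, actions) and the hyperparameters are illustrative assumptions, not details from the study; a deep RL system like the one described would replace the lookup table with a neural network.

```python
import random

# Minimal tabular Q-learning: learn action values by trial and error.
# env is assumed to expose reset(), step(action), and a list of actions;
# this API and all hyperparameters are hypothetical.
def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    q = {}  # (state, action) -> estimated long-term value
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Explore occasionally; otherwise act on current estimates.
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: q.get((state, a), 0.0))
            next_state, reward, done = env.step(action)
            # Nudge the estimate toward reward plus discounted future value.
            best_next = max(q.get((next_state, a), 0.0) for a in env.actions)
            target = reward + gamma * best_next * (not done)
            old = q.get((state, action), 0.0)
            q[(state, action)] = old + alpha * (target - old)
            state = next_state
    return q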

The data was gathered by making a web-based Atari 2600 emulator, called the Atari Grand Challenge, available using the Javatari tool written in JavaScript. The researchers used a form of gamified crowdsourcing that leveraged people’s desire to play games, along with a reward mechanism that ranked each player’s performance.
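The article doesn't spell out the replay format, but a per-frame record along these lines would capture what the study says was logged. Every field name here is a hypothetical illustration, not the dataset's actual schema.

```python
from dataclasses import dataclass

# Hypothetical per-frame replay record; the real Atari Grand Challenge
# schema is not described in the article.
@dataclass
class ReplayFrame:
    frame_index: int   # position within the play session
    screen: bytes      # emulator screen capture for this frame
    action: int        # joystick/button input the human chose
    reward: int        # score delta earned on this frame
    score: int         # cumulative score so far
    terminal: bool     # whether the episode ended here
```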

Around 9.7 million frames, or about 45 hours of gameplay, were collected and analyzed. Five games were chosen for their varying levels of difficulty and complexity: Video Pinball, Q*bert, Space Invaders, Ms. Pac-Man, and Montezuma’s Revenge.

The results have been promising so far. By feeding the system information such as the actions players took during the games, in-game rewards, and current scores, the researchers were able to demonstrate the value of this kind of data for training machine-learning systems. Going forward, the researchers hope to use professional players to improve the data’s ability to train AI that is even better at responding to unknown situations.
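One common way to exploit such human demonstrations is behavior cloning: treat each (frame, human action) pair as a supervised example and train a policy to predict what the player did. This is a generic sketch of that idea, not the researchers' actual training procedure, which the article doesn't detail; the network and data loader are assumed.

```python
import torch
import torch.nn as nn

# Behavior-cloning sketch: learn to predict the human player's action
# from the game frame. replay_loader is assumed to yield batches of
# (frames, human_actions); all names here are illustrative.
def train_behavior_cloning(policy: nn.Module, replay_loader, epochs=10, lr=1e-4):
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for frames, human_actions in replay_loader:
            logits = policy(frames)              # action scores per frame
            loss = loss_fn(logits, human_actions)  # match the human's choice
            opt.zero_grad()
            loss.backward()
            opt.step()
    return policy
```

A policy pretrained this way can then be fine-tuned with reinforcement learning, which is one reason demonstration datasets like this one are valuable for hard-exploration games such as Montezuma's Revenge.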
