17-year-old uses deep learning to program AI cars that race around in your browser

German software engineer Jan Hünermann watches two autonomous cars — one colored pink, the other turquoise — race around a track. There are various obstacles set up to confound them, but thanks to the brain-inspired neural networks that provide them with their intelligence, the cars smoothly navigate these obstacles with the confidence of seasoned pros.

From time to time, Hünermann throws a new obstacle in their path, and then watches with satisfaction as the cars dodge this new impediment. Best of all? The longer he watches them, the smarter the cars become: learning from their mistakes until they can handle just about any scenario that comes their way.

There are a couple of unusual things about the scenario. The first is that Hünermann is only 17 years old, impressively young to be coding autonomous cars. The second is that the cars don’t actually exist. Or at least they don’t exist outside of a couple of crudely-rendered sprites in a web browser.

This is Hünermann’s “Self-Driving Cars In A Browser” project; one which… well, does what it says on the tin, really. It’s a web app designed to “create a fully self-learning agent” that’s able to navigate a pair of cars through an ever-changing 2D environment. The “ever-changing” bit comes down to the individual users, who are able to use their mouse to click and drag new items onto the preexisting map.

Picture a solid vector suddenly appearing in the middle of the freeway on your commute to work, and you’ll have some sympathy for what Hünermann’s long-suffering cars are faced with!

The idea for the project hit Hünermann a couple of years ago, when he was a high school sophomore. Like everyone else who follows tech, he marveled at the news coming out of Google DeepMind, showing how the cutting-edge research team there had used a combination of reinforcement learning (a type of AI that works toward specific goals through trial and error) and deep neural networks to build bots that could work out how to play old Atari games. Unlike the intelligent agents that make up non-player characters (NPCs) in video games, these bots were able to learn video games without anyone explicitly telling them what to do.

At the time, Hünermann was focused on building iOS apps and websites as extracurricular projects. With far more limited resources than Google, he nonetheless decided to follow its example. He went ahead and downloaded DeepMind's paper, read it, and decided to have a go at coding his own project.

“I was really interested in this field of deep learning and wanted to get to know it,” Hünermann told Digital Trends. “I thought that one possible way to do that would be to create a self-driving car project. I didn’t actually have a car, so I decided to do it in the browser.”

The virtual cars themselves boast 19 distance sensors, which point out of the car in different directions. You can picture these like torch beams, with each beam starting out strong and getting fainter the further it travels from the vehicle. The shorter the beam, the stronger the input the agent receives when it comes into contact with something, similar to parking sensors that beep more rapidly the closer you get to a wall. Taken together with the speed of a car and knowledge of the action it is taking, the cars provide 158 dimensions of information.
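The sensor scheme described above can be sketched in a few lines of JavaScript. This is an illustrative reconstruction, not the project's actual code: the names `sensorReading` and `sensorAngles`, the arc width, and the maximum range are all assumptions chosen to show the idea that a nearer obstacle produces a stronger signal.

```javascript
// Hypothetical sketch of the distance-sensor encoding described above.
const NUM_SENSORS = 19;
const MAX_RANGE = 100; // maximum beam length, in arbitrary units

// A closer obstacle yields a stronger signal, like a parking sensor
// beeping faster as you approach a wall.
function sensorReading(distanceToObstacle) {
  const clamped = Math.min(distanceToObstacle, MAX_RANGE);
  return 1 - clamped / MAX_RANGE; // 1 = touching, 0 = nothing in range
}

// Spread the 19 beams evenly across a forward-facing arc.
function sensorAngles(arcDegrees = 180) {
  const angles = [];
  for (let i = 0; i < NUM_SENSORS; i++) {
    angles.push(-arcDegrees / 2 + (arcDegrees * i) / (NUM_SENSORS - 1));
  }
  return angles;
}
```

Encoding distance this way keeps every input in the range 0 to 1, which is a common convenience when feeding readings into a neural network.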

This data is then fed into a multi-layer neural network. The more the cars drive and crash, the more the "weights" connecting the network's different nodes are adjusted so that it can learn what to do. The result is that, as with any human skill, the longer the cars practice driving, the better they get.
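The weight-adjustment idea can be illustrated with a deliberately tiny example. This is a minimal sketch, not the project's network: a single linear layer trained by repeated gradient-descent steps on a squared error, with made-up inputs and a made-up target.

```javascript
// Weighted sum of inputs: the simplest possible "network".
function forward(weights, inputs) {
  return weights.reduce((sum, w, i) => sum + w * inputs[i], 0);
}

// Nudge each weight against the gradient of (prediction - target)^2.
function updateWeights(weights, inputs, target, learningRate = 0.01) {
  const error = forward(weights, inputs) - target;
  return weights.map((w, i) => w - learningRate * error * inputs[i]);
}

// After many small updates the prediction drifts toward the target,
// much as the cars improve with every crash they learn from.
let w = [0, 0];
for (let step = 0; step < 1000; step++) {
  w = updateWeights(w, [1, 2], 3);
}
```

A real deep network stacks many such layers with nonlinearities between them, and a reinforcement-learning agent derives its error signal from rewards rather than a fixed target, but the update loop follows the same shape.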

They’re not perfect, of course. In particular, the cars tend to be a bit optimistic about the size of a gap they can squeeze through, since the sensor positioned at the front of the car spots open road without always taking the car’s width into account. Still, it’s impressive stuff — and the point is that it’s getting more impressive all the time.

“One thing I’d like to add is more intelligence so that the cars can realize that they’re stuck, and back up and try another route,” Hünermann continued. “It would also be really interesting to add traffic, and maybe even lanes as well. The idea is to get it to reflect, as closely as possible, the real world.”

If you want to follow what he’s doing with the project, Hünermann has made the code for the demo, along with the entire JavaScript library, available on GitHub. Given that real-life self-driving cars are based on the same kinds of neural networks used here, Hünermann’s creation is a great way to get to grips with a simplified version of the tech that’s (no pun intended) driving real-world autonomous car projects.

As for what’s next, Hünermann is off to study computer science at university in England this year. “I’d like to do this as a job,” he said. “I’m absolutely fascinated by this area of research.”

Who knows: by the time he arrives in the U.K., he may even be legally old enough to drive himself!
