
17-year-old uses deep learning to program AI cars that race around in your browser

German software engineer Jan Hünermann watches two autonomous cars — one colored pink, the other turquoise — race around a track. There are various obstacles set up to confound them, but thanks to the brain-inspired neural networks that provide them with their intelligence, the cars smoothly navigate these obstacles with the confidence of seasoned pros.

From time to time, Hünermann throws a new obstacle in their path, and then watches with satisfaction as the cars dodge this new impediment. Best of all? The longer he watches them, the smarter the cars become: learning from their mistakes until they can handle just about any scenario that comes their way.


There are a couple of unusual things about the scenario. The first is that Hünermann is only 17 years old, impressively young to be coding autonomous cars. The second is that the cars don’t actually exist. Or at least they don’t exist outside of a couple of crudely-rendered sprites in a web browser.

This is Hünermann’s “Self-Driving Cars In A Browser” project; one which… well, does what it says on the tin, really. It’s a web app designed to “create a fully self-learning agent” that’s able to navigate a pair of cars through an ever-changing 2D environment. The “ever-changing” bit comes down to the individual users, who are able to use their mouse to click and drag new items onto the preexisting map.

Picture a solid vector suddenly appearing in the middle of the freeway on your commute to work, and you’ll have some sympathy for what Hünermann’s long-suffering cars are faced with!

The idea for the project hit Hünermann a couple of years ago when he was a high school sophomore. Like everyone else who follows tech, he marveled at the news coming out of Google DeepMind, showing how the cutting-edge research team there had used a combination of reinforcement learning (a type of AI that works toward specific goals through trial-and-error) and deep learning neural networks to build bots that could work out how to play old Atari games. Unlike the intelligent agents that make up non-player characters (NPCs) in video games, these bots were able to learn video games without anyone explicitly telling them what to do.
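The trial-and-error loop at the heart of reinforcement learning can be sketched in a few lines. This is an illustrative toy, not code from DeepMind or from Hünermann's project; the function name and the tabular representation are assumptions for the example.

```javascript
// Toy Q-learning update: an agent keeps estimated values for
// (state, action) pairs and nudges them toward observed rewards.
// Illustrative only -- not the project's actual training code.
function qUpdate(qTable, state, action, reward, nextState, alpha = 0.1, gamma = 0.9) {
  const key = `${state}:${action}`;
  const current = qTable[key] ?? 0;
  // Best value the agent currently expects from the next state
  const nextBest = Math.max(0, ...Object.keys(qTable)
    .filter(k => k.startsWith(`${nextState}:`))
    .map(k => qTable[k]));
  // Move the estimate toward reward + discounted future value
  qTable[key] = current + alpha * (reward + gamma * nextBest - current);
  return qTable[key];
}
```

Each crash or success adjusts the estimates slightly, which is why the agent needs no explicit instructions, only a reward signal.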

At the time, Hünermann's computer-based extracurricular activities were focused on building iOS apps and websites. Despite having nowhere near Google's resources, he decided to follow its example: he downloaded DeepMind's paper, read it, and set about coding his own project.

“I was really interested in this field of deep learning and wanted to get to know it,” Hünermann told Digital Trends. “I thought that one possible way to do that would be to create a self-driving car project. I didn’t actually have a car, so I decided to do it in the browser.”

The virtual cars themselves boast 19 distance sensors, which extend out from the car in different directions. You can picture these like torch beams, with each beam starting out strong and then getting fainter the further it travels from the vehicle. The shorter the distance to an obstacle, the stronger the input the agent receives when a beam comes into contact with something, similar to parking sensors that beep more rapidly the closer you get to a wall. Taken in conjunction with the speed of a car and knowledge about the action it is taking, the cars provide 158 dimensions of information.
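That "fainter with distance" behavior might be modeled like this. The function name and the linear falloff are assumptions for illustration, not the project's actual sensor code:

```javascript
// Hypothetical sensor model: the closer the obstacle, the stronger
// the signal fed to the network (1 at contact, fading to 0 at max range).
function sensorInput(distance, maxRange) {
  if (distance >= maxRange) return 0;  // nothing within the sensor's reach
  return 1 - distance / maxRange;      // strong when close, weak when far
}

// A car with 19 such beams produces one value per sensor each frame:
const readings = [12, 40, 150].map(d => sensorInput(d, 150));
```

The network never sees raw distances, only these normalized intensities, which is what makes the parking-sensor analogy apt.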

This data is then fed into a multi-layer neural network. The more the cars drive and crash, the more the “weights” connecting the network’s different nodes are adjusted so that it can learn what to do. The result is that, like any human skill, the longer the cars practice driving, the better they get.
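"Adjusting the weights" can be shown with a single linear neuron trained by gradient descent. This is a stand-in for the project's multi-layer network, not its actual training code, and all names here are assumptions:

```javascript
// Toy weight adjustment: one linear neuron nudges its weights
// to shrink the gap between its output and a target value.
function trainStep(weights, inputs, target, lr = 0.1) {
  const output = weights.reduce((sum, w, i) => sum + w * inputs[i], 0);
  const error = output - target;
  // Move each weight opposite to its contribution to the error
  return weights.map((w, i) => w - lr * error * inputs[i]);
}

// Repeated practice: the error shrinks as the weights settle.
let w = [0, 0];
for (let step = 0; step < 200; step++) {
  w = trainStep(w, [1, 0.5], 1);
}
```

A real network stacks many such neurons in layers and backpropagates the error through all of them, but the core idea is the same: each mistake slightly shifts the weights that caused it.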

They’re not perfect, of course. In particular, the cars tend to be a bit optimistic about the size of a gap they can squeeze through, since the sensor positioned at the front of the car spots open road without always taking the car’s width into account. Still, it’s impressive stuff — and the point is that it’s getting more impressive all the time.

“One thing I’d like to add is more intelligence so that the cars can realize that they’re stuck, and back up and try another route,” Hünermann continued. “It would also be really interesting to add traffic, and maybe even lanes as well. The idea is to get it to reflect, as closely as possible, the real world.”

If you want to follow what he’s doing with the project, Hünermann has made the code for the demo, along with the entire JavaScript library, available on GitHub. Given that real-life self-driving cars are based on the same kinds of neural networks used here, Hünermann’s creation is a great way to get to grips with a simplified version of the tech that’s (no pun intended) driving real-world autonomous car projects.

As to what’s next for himself, Hünermann is off to study Computer Science at university in England this year. “I’d like to do this as a job,” he said. “I’m absolutely fascinated by this area of research.”

Who knows: by the time he arrives in the U.K., he may even be legally old enough to drive himself!

Luke Dormehl
Former Digital Trends Contributor