Nvidia’s new A.I. creates entire virtual cities by watching dash cam videos

From the Grand Theft Auto franchise to the plethora of available Spider-Man titles, plenty of video games allow you to explore a three-dimensional representation of a real (or thinly fictionalized) city. Creating these cityscapes isn’t easy, however. The task requires thousands of hours of computer modeling and careful reference studies before players have the chance to walk, drive or fly through the completed virtual world.

An impressive new tech demo from Nvidia shows that there is another way. At the NeurIPS artificial intelligence conference in Montreal, the company showed how machine learning can generate a convincing virtual city simply by watching dash cam videos. The videos were gathered from self-driving cars during a week of driving around cities. Training the neural network took around a week on Nvidia’s Tesla V100 GPUs in a DGX-1 supercomputer system. Once the A.I. had learned what it was looking at and could segment scenes into color-coded objects, the virtual cities were rendered using Unreal Engine 4.
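The color-coded intermediate step described above can be sketched in a few lines. This is a minimal illustration, not Nvidia’s actual system: the class names and palette below are hypothetical stand-ins for whatever labeling the real network uses, and the heavy lifting — predicting the class labels from a dash-cam frame, then generating photorealistic output from the color map — is done by neural networks not shown here.

```python
import numpy as np

# Hypothetical palette mapping semantic class IDs to colors, loosely
# mirroring the "color-coded objects" the article describes. These
# classes and colors are illustrative assumptions, not Nvidia's.
PALETTE = {
    0: (128, 64, 128),   # road
    1: (70, 70, 70),     # building
    2: (107, 142, 35),   # vegetation
    3: (0, 0, 142),      # car
}

def labels_to_colormap(labels: np.ndarray) -> np.ndarray:
    """Turn an (H, W) array of class IDs into an (H, W, 3) color-coded map.

    In the pipeline the article describes, a trained network would first
    predict `labels` from a dash-cam frame; the resulting color map is the
    intermediate representation that a generative model (or a game engine
    such as Unreal Engine 4) then renders into a final image.
    """
    h, w = labels.shape
    out = np.zeros((h, w, 3), dtype=np.uint8)
    for class_id, color in PALETTE.items():
        out[labels == class_id] = color  # paint all pixels of this class
    return out

# Toy 2x2 "segmentation": road on the top row, cars on the bottom row.
toy = np.array([[0, 0], [3, 3]])
colormap = labels_to_colormap(toy)
```

In the real system this mapping runs per frame over video, so temporal consistency between frames becomes the hard part; the toy version only shows why the color map is a convenient handoff point between the recognition network and the renderer.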

“One of the main obstacles developers face is that creating content for virtual worlds is expensive,” Bryan Catanzaro, vice president of applied deep learning research at Nvidia, told Digital Trends. “It can take dozens of artists months to create an interactive world for games or VR applications. We’ve created a new way to render content using deep learning — using A.I. that learns from the real world — which could help artists and developers create virtual environments at a much lower cost.”

Research at NVIDIA: The First Interactive AI Rendered Virtual World

Catanzaro said that there are myriad potential real-world applications for this technology. For instance, it could allow users to customize avatars in games by taking a short video with their cell phone and then uploading it. This could also be used to create amusing videos in which the user’s features are mapped onto another body’s movement. (As seen in the above video, Nvidia had some fun making one of its developers perform the Gangnam Style dance.)

“Architects could [additionally] use it to render virtual designs for their clients,” Catanzaro continued. “You could use this technique to train robots or self-driving cars in virtual environments. In all of these cases, it would lower the cost and time it takes to create virtual worlds.”

He added that this is still early research, and will take “a few years” to mature and roll out in commercial applications. “But I’m excited that it could fundamentally change the way computer graphics are created,” Catanzaro said.

Luke Dormehl