
Nvidia's Drive PX 2 to be used in autonomous racecars for the Roborace Championship

Image by Chief Design Officer Daniel Simon / Daniel Simon Ltd. Used with permission by copyright holder.
During the GPU Technology Conference keynote on Tuesday, Nvidia CEO Jen-Hsun Huang said that the company’s Drive PX 2 AI supercomputer will be used in autonomous racecars that will compete in the Roborace Championship. He also revealed that the company is working on an end-to-end mapping system using Drive PX 2-based cars and Tesla GPUs, the latter of which are typically installed in the data center.

For starters, the Roborace Championship, first reported on by Digital Trends, is a support series for the Formula E electric racing championship, meaning the racecars are both driverless and powered by an alternative, earth-friendly energy source. Each one-hour race features ten teams, with every team fielding two identical cars packing Nvidia’s Drive PX 2. Because these cars can’t rely on driver intuition to win, the teams must program their AI to be highly strategic.

According to Nvidia, the Drive PX 2 supercomputer is the size of a lunchbox, keeping with the overall compact design of the cars. The Drive PX 2 is also capable of up to 24 trillion operations a second for AI applications, thus providing “supercomputer-class” performance, or the processing power of 150 MacBook Pro laptops. And because of its deep learning capability, the cars will become smarter — and thus faster — the more they actually race.


“Since the cars don’t need human drivers, these racecars are incredibly compact, and the designs — conceived by auto designer Daniel Simon, the man behind Tron: Legacy’s light cycles — are like nothing that’s been seen on a road, or a racetrack, before,” Nvidia’s Danny Shapiro said in a blog post. “There’s no room in these racers for the trunk full of PCs that powered earlier generations of autonomous vehicles.”

Nvidia says the Drive PX 2 is capable of incorporating input from a number of sensors installed in the racecar such as GPS, cameras, radar, and more. Ultimately, the Drive PX 2 supercomputer and the Roborace Championship races should lead to smarter and safer driverless and standard cars for public consumption.

As for those high-definition maps mentioned in Tuesday’s keynote, the system will enable the rapid development of HD maps and frequent updates through the use of the Drive PX 2 and Tesla GPUs. These maps are important to driverless cars because they reduce the amount of processing the supercomputer performs as it incorporates inputs from multiple sensors. Just imagine how easy driving can become when you know exactly what’s up the street or around the corner.

The HD mapping system is an open platform based on Nvidia’s DriveWorks SDK. It’s a “highly efficient” system that pushes most of the data processing onto the Drive PX 2, so communication with the cloud is kept to a minimum. It also uses a technique Nvidia calls visual simultaneous localization and mapping (visual SLAM), paired with deep learning, to handle the mapping process.
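To make the "process locally, upload little" idea concrete, here is a minimal sketch of that division of labor. This is purely illustrative and not Nvidia's DriveWorks API: the function names, the string-tagged landmarks, and the detection stand-in are all invented for the example. The point is simply that the car keeps its own map and ships only the new features it discovers, rather than streaming raw sensor data to the cloud.

```python
# Illustrative sketch (NOT the DriveWorks API): the car processes each
# camera frame on board and uploads only compact map deltas to the cloud.

def extract_landmarks(frame):
    """Stand-in for the on-board deep-learning detector that picks out
    map-relevant features (signs, lanes, landmarks) from a frame."""
    return {obj for obj in frame
            if obj.startswith(("sign:", "lane:", "landmark:"))}

def map_delta(known_landmarks, frame):
    """Return only the landmarks not already in the local map, and fold
    them into the map. This delta is all that needs to reach the cloud."""
    new = extract_landmarks(frame) - known_landmarks
    known_landmarks |= new
    return new

local_map = {"sign:stop@12", "lane:left@12"}         # what the car already knows
frame = ["sign:stop@12", "sign:yield@13", "car:ahead"]  # one camera frame
delta = map_delta(local_map, frame)
# delta == {"sign:yield@13"}: one small update instead of a whole frame
```

The transient detection ("car:ahead") never leaves the car, which is the bandwidth win the article describes.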

Nvidia says that the deep learning aspect helps detect important features during the mapmaking process, such as road signs, lanes, and landmarks. It can also recognize changes in the environment, so the system can record and update the maps used by autonomous vehicles.

So why is this good news? Until now, mapping was done by cars fitted with numerous sensors that gathered huge volumes of data, which was then recorded and processed offline. GPS alone isn’t enough, either, as autonomous vehicles require exact details about the road. Precision comes from combining GPS with the car’s internal sensors and motion algorithms, which convert 2D sensor data into 3D information.
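The GPS-plus-internal-sensors idea above can be sketched with a toy complementary filter. This is a hypothetical illustration, not anything from Nvidia's stack: the weights and positions are made up, and a real vehicle would use something far more sophisticated (e.g., a Kalman filter over many sensors). It only shows why blending a coarse absolute fix with smooth relative motion beats either source alone.

```python
# Toy sensor fusion: GPS gives an absolute but noisy position; dead
# reckoning from wheel/IMU motion is smooth but drifts. Blend them.

def fuse(gps_pos, dead_reckoned_pos, gps_weight=0.2):
    """Complementary filter: mostly trust smooth dead reckoning in the
    short term, but let GPS pull the estimate back toward truth."""
    return tuple(gps_weight * g + (1 - gps_weight) * d
                 for g, d in zip(gps_pos, dead_reckoned_pos))

pos = (0.0, 0.0)
moves = [(1.0, 0.0), (1.0, 0.1)]        # relative motion estimates per step
gps_fixes = [(1.3, -0.2), (2.4, 0.3)]   # noisy absolute fixes (~metre error)

for move, gps in zip(moves, gps_fixes):
    dead_reckoned = (pos[0] + move[0], pos[1] + move[1])
    pos = fuse(gps, dead_reckoned)
```

After two steps the fused estimate sits between the drifting dead-reckoned track and the jumpy GPS fixes, which is the "precision" the paragraph alludes to.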

Nvidia launched the Drive PX 2 during CES 2016 in January, billing it as the world’s first in-car artificial intelligence supercomputer. Additional information on Nvidia’s Drive solutions can be found on the company’s website.

Kevin Parrish
Former Digital Trends Contributor