
Nvidia hits the brakes on public autonomous tests after fatal Uber crash

Nvidia is halting all autonomous vehicle testing on public roads. The company previously tested its driver-free technology in California, New Jersey, Japan, and Germany, but the fatal crash in Arizona involving one of Uber’s self-driving cars pushed Nvidia to rethink its strategy. A single error can be devastating.

“The accident was tragic. It’s a reminder of how difficult self-driving car technology is and that it needs to be approached with extreme caution and the best safety technologies,” an Nvidia spokesperson said. “This tragedy is exactly why we’ve committed ourselves to perfecting this life-saving technology.”


A driverless Uber vehicle struck a pedestrian late Sunday night in Tempe, Arizona. Elaine Herzberg, 49, was walking outside the crosswalk when she was hit. She was rushed to a hospital but later died of her injuries. Uber has since halted all autonomous vehicle testing on public roads.

A big chunk of Nvidia’s opening keynote at its GPU Technology Conference focused on autonomous vehicles. Nvidia founder Jen-Hsun Huang called safety the hardest computing problem. Because so much is at stake, it needs to be addressed “step by step” to prevent future accidents similar to the one involving Uber’s vehicle in Tempe.

“This is the ultimate deep-learning, A.I. problem,” he said. “We have to manage faults even when we detect them. The bar for functional safety is really, really high. We’ve dedicated our last five to seven years to understanding this system. We are trying to understand this from end to end.” 

He believes autonomous vehicles will eventually drive better than humans and become a staple of society as overcrowding pushes people away from cities. Humans are also growing more dependent on Amazon-like services that ship products to their doorsteps rather than requiring customers to venture out to the store. Another 1 billion vehicles will enter society over the next 12 years, he predicted.

For now, until Nvidia understands why the Uber vehicle struck a pedestrian, the company will rely on simulations and private lots to train its autonomous vehicle technology. Its “fleet” of manually driven data collection vehicles, meanwhile, will continue to roll across America’s highways.

One topic discussed during Tuesday’s keynote was perception: the car’s ability to understand its surroundings. That includes perceiving space, distance, objects of any shape, scenes, paths, weather, and more, totaling 10 “networks.” Nvidia plans to assign 10 high-powered DGX-2 systems to each network.

Huang also introduced the company’s next-generation supercomputer for self-driving cars, called Drive Orin. The successor to the current Drive Pegasus, it combines multiple Pegasus computers into a single Orin package, providing more computing power in the same physical space. Nvidia also designed it to draw less power from the battery, increasing the vehicle’s overall range.

Also during the keynote, Nvidia showcased a way to remotely take control of a real-world autonomous vehicle using a virtual reality headset.

Kevin Parrish
Former Digital Trends Contributor
Kevin started taking PCs apart in the 90s when Quake was on the way and his PC lacked the required components. Since then…