
When diagnosis time means life or death, NVIDIA’s advanced AI can save lives

We may buy new smartphones and laptops every year or two, but when it comes to expensive medical computers, that's not an option. There are more than three million medical instruments installed in hospitals today, with more than 100,000 new instruments added each year, according to Nvidia CEO Jensen Huang, speaking at the company's GPU Technology Conference (GTC). At this rate, it would take more than 30 years to replace all the old hospital equipment. So how do we advance medical care without adding more cost?

Nvidia's approach is to leverage the cloud to provide a "virtual upgrade" to existing medical equipment. Dubbed Project Clara, the medical cloud is described by Huang as a "medical imaging supercomputer that is a data center virtualized, remoted, and is a multi-modality, multi-user supercomputer."


It's an end-to-end solution leveraging the power of Nvidia's GPUs and its cloud infrastructure, giving medical practitioners the ability to upload, analyze, and interpret data. Here at GTC this year, Nvidia is showing off how it uses deep learning to make inferences that detect diseases and pathologies at an earlier stage, which could save lives.


Early Detection to Save Lives

Nvidia CEO Jensen Huang boasts of the GPU's computing might at GTC 2018.

Early detection is extremely important in the case of sepsis, a condition that claims more lives each year than breast and prostate cancers combined. It's also an area of interest to Johns Hopkins University researchers, and Associate Professor of Computer Science Suchi Saria has worked on models to train AI to make early detection a reality. Echoing Huang's pitch, Saria's AI model examines existing patient sensor data, so hospitals don't need to purchase costly new equipment to improve early detection and make medical diagnoses.

Deep learning has completely turbocharged modern AI.

This is particularly important because early signs of sepsis are hard to detect, and the condition is often misdiagnosed or ignored until it is too late to treat, Saria said. In fact, for each hour that treatment is delayed, likely as a result of incorrect diagnosis, the mortality rate for sepsis jumps seven to eight percent. Sepsis is treatable, but only if it's detected early, Saria noted, highlighting that the condition is the eleventh leading cause of death.

In a case study on sepsis, a female patient was admitted to Johns Hopkins Medical Center for what was believed to be pneumonia, Saria said. Doctors administered the usual course of antibiotics and weren't too concerned. Her condition worsened, and on the seventh day she showed visible symptoms of septic shock and was transferred to the intensive care unit. Once she was in the ICU, her kidneys and lungs began to fail, and she passed away on day 22.

"The challenge is that sepsis is extremely hard to diagnose," Saria said. A study conducted by Harvard University revealed that medical experts weren't able to agree on an early diagnosis of sepsis when presented with symptoms of the condition. Late stages of sepsis, including septic shock, are easier to identify, but by that point the mortality rate has jumped dramatically.

To make early sepsis detection possible, Saria and her team created a framework called TREWS (pronounced "trues"), which stands for Targeted, Real-Time Early Warning System. TREWS is a machine learning system that uses deep learning to identify symptoms and make medical diagnoses.

"Deep learning has completely turbocharged modern AI," Huang exclaimed. "This is an incredible algorithm that can automatically detect important features out of data, and from this it can construct hierarchical knowledge representations. And if you feed it more data, it will become more and more robust."

The earlier the detection, the better.

Much like a modern smartphone gaining new capabilities from the cloud, the Nvidia medical cloud essentially allows hospitals and medical providers to upload existing patient data, create models, and leverage the power of artificial intelligence. The result is that diseases can be detected earlier, pathologies can be modeled and more easily understood, and scans become richer with more detail and information.

The system “leverages high dimensional, noisy health system data to build things that are very patient specific,” explained Saria. “This brings precision into precision healthcare.” Essentially, TREWS takes a look at all the data and “moves from a reactive to a proactive prevention system.”

The challenge with deep learning, Huang explained, is that “it needs a ton of data and a ton of computers.”

In the case of identifying sepsis, Saria relies on historical data from past patients and a sequential learning framework. The end goal of TREWS is to have the AI system detect sepsis as early as possible and alert doctors, or prompt them to perform more tests to determine whether the patient does in fact have sepsis, as sketched below.
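The article doesn't detail how TREWS works internally, but the general idea of scoring a patient's vital-sign history over time and raising an alert can be sketched with a small recurrent model. The code below is a minimal, illustrative sketch in PyTorch, not Saria's actual TREWS implementation; the vital-sign features, window length, model shape, and alert threshold are all assumptions for demonstration.

```python
# Minimal sketch of a sequential early-warning model, loosely in the spirit of
# the system described above. This is NOT the published TREWS model; the
# features, hidden size, and threshold are illustrative assumptions.
import torch
import torch.nn as nn

VITALS = ["heart_rate", "resp_rate", "temperature", "systolic_bp", "wbc_count"]

class EarlyWarningNet(nn.Module):
    def __init__(self, n_features: int = len(VITALS), hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # one risk logit per time step

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, hours, n_features) -- one row of vitals per hour
        out, _ = self.lstm(x)
        return torch.sigmoid(self.head(out)).squeeze(-1)  # (batch, hours)

def first_alert_hour(model: nn.Module, vitals: torch.Tensor, threshold: float = 0.8):
    """Return the first hour at which predicted sepsis risk crosses the threshold."""
    with torch.no_grad():
        risk = model(vitals.unsqueeze(0))[0]              # per-hour risk scores
    flagged = (risk >= threshold).nonzero(as_tuple=True)[0]
    return int(flagged[0]) if len(flagged) else None      # hour index, or no alert

if __name__ == "__main__":
    model = EarlyWarningNet()                    # untrained; shapes only
    fake_history = torch.randn(48, len(VITALS))  # 48 hours of standardized vitals
    print("Alert at hour:", first_alert_hour(model, fake_history))
```

In practice such an alert would not make the diagnosis outright; as Saria describes, it would prompt clinicians to run confirmatory tests.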

In the case of the patient who succumbed to sepsis, the AI would have detected the condition 12 hours before doctors did. And beyond sepsis, AI could be used to detect all sorts of other medical conditions, such as hypertension and heart disease, Saria said.

Inferring More Information

To demonstrate some of the advances that Nvidia's hardware and software provide researchers, AI was used to infer how a patient's left ventricle would look in 3D and to display data such as the heart's ejection fraction.
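Ejection fraction itself is a simple ratio: the share of blood the left ventricle pumps out on each beat, computed from the end-diastolic and end-systolic volumes that a 3D reconstruction makes measurable. A quick sketch follows; the volumes are made-up example numbers, not values from Nvidia's demo.

```python
def ejection_fraction(end_diastolic_ml: float, end_systolic_ml: float) -> float:
    """Ejection fraction = (EDV - ESV) / EDV, expressed as a percentage."""
    stroke_volume = end_diastolic_ml - end_systolic_ml
    return 100.0 * stroke_volume / end_diastolic_ml

# Example values only; a healthy left ventricle typically ejects roughly 55-70%.
print(ejection_fraction(end_diastolic_ml=120.0, end_systolic_ml=50.0))  # ~58.3
```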

In another example, Philips, a maker of ultrasound machines, was able to take a black-and-white 2D ultrasound of a fetus and reconstruct it into a three-dimensional image with a more lifelike appearance. Additionally, using the GPU's ray tracing capabilities, the ultrasound scan was visualized as if there were a virtual light inside the uterus, complete with some subsurface scattering for flesh tones.

Outside of Project Clara, Nvidia is also building the hardware needed to make some of these complex processes happen quickly. Medical imaging requires even more powerful hardware, and Huang claims that “the world wants a gigantic GPU.”

To facilitate early disease detection, more comprehensive scans, and heavy deep learning and artificial intelligence workloads, Nvidia introduced the DGX-2 supercomputer at the GPU Technology Conference.

The company states that the DGX-2 is up to ten times faster than the DGX-1, which was introduced a mere six months prior, and that the system is capable of replacing 300 dual-CPU servers valued at $3 million. Essentially, the DGX-2 is an eighth the cost and occupies a sixtieth the space of existing equipment, while consuming only an eighteenth the power.
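Those fractions are easier to appreciate with a little arithmetic. The sketch below applies the article's figures to the $3 million cluster Huang used as a baseline; the roughly $399,000 DGX-2 launch price and ~10 kW power draw are assumptions drawn from the GTC announcement, not from this article.

```python
# Back-of-the-envelope check of the DGX-2 comparison quoted above.
# The cluster figures come from the article; the DGX-2 price and power draw
# are assumed launch figures and may differ from final configurations.
cluster_cost_usd = 3_000_000   # 300 dual-CPU servers, per the keynote claim
dgx2_cost_usd = 399_000        # assumed DGX-2 launch price
dgx2_power_kw = 10             # assumed DGX-2 power draw

print(f"Cost ratio: ~1/{cluster_cost_usd / dgx2_cost_usd:.1f}")       # ~1/7.5, roughly an eighth
print(f"Implied cluster power: ~{dgx2_power_kw * 18} kW")             # 'an eighteenth the power'
```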

“Our strategy at Nvidia is to advance GPU computing for deep learning in AI at the speed of light,” Huang said.

Chuong Nguyen