
When diagnosis time means life or death, NVIDIA’s advanced AI can save lives

We may buy new smartphones and laptops every year or two, but when it comes to expensive medical computers, that’s not an option. There are more than three million pieces of medical equipment installed in hospitals today, and more than 100,000 new instruments are added each year, according to Nvidia CEO Jensen Huang, speaking at the company’s GPU Technology Conference (GTC). At this rate, it would take more than 30 years to replace all the old hospital equipment. So how do we advance medical care without adding more cost?

Nvidia’s answer is to leverage the cloud to give existing medical equipment a “virtual upgrade.” Dubbed Project Clara, the medical cloud is described by Huang as a “medical imaging supercomputer that is a data center virtualized, remoted, and is a multi-modality, multi-user supercomputer.”


It’s an end-to-end solution that leverages the power of Nvidia’s GPUs and its cloud infrastructure, letting medical practitioners upload, analyze, and interpret data. Here at GTC this year, Nvidia is showing off how it uses deep learning to detect diseases and pathologies at an earlier stage, which could save lives.

Early Detection to Save Lives

Nvidia CEO Jensen Huang boasts of the GPU’s computing might at GTC 2018.

Early detection is extremely important in the case of sepsis, a condition that claims more lives each year than breast and prostate cancers combined. It’s also an area of interest to Johns Hopkins University researchers: Suchi Saria, an associate professor of computer science there, has worked on models that train AI to make early detection a reality. And much like Huang’s cloud-based approach, Saria’s AI model examines existing patient sensor data, so hospitals don’t need to purchase costly new equipment to improve early detection and medical diagnoses.

Deep learning has completely turbocharged modern AI.

This is particularly important because early signs of sepsis are hard to detect, and the condition is often misdiagnosed or ignored until it is too late to treat, Saria said. In fact, for each hour that treatment is delayed, often as a result of incorrect diagnosis, the mortality rate for sepsis jumps seven to eight percent. Sepsis is treatable, but only if it’s detected early, Saria noted, highlighting that the condition is the eleventh leading cause of death.
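That hourly figure compounds. A rough back-of-the-envelope sketch makes the point; note that the 7.5 percent hourly rate and the multiplicative model here are illustrative assumptions for the sake of the example, not figures from Saria’s study:

```python
# Illustrative only: how a constant hourly relative increase in sepsis
# mortality compounds over a treatment delay. The 7.5% hourly rate and
# the multiplicative model are assumptions, not figures from the study.
def relative_risk(hours_delayed, hourly_increase=0.075):
    """Relative mortality risk versus immediate treatment, assuming a
    constant multiplicative increase for each hour of delay."""
    return (1 + hourly_increase) ** hours_delayed

if __name__ == "__main__":
    for hours in (1, 6, 12):
        print(f"{hours:2d} h delay -> {relative_risk(hours):.2f}x relative risk")
```

Under these assumptions, a 12-hour delay more than doubles the relative risk, which is why shaving even a few hours off detection time matters so much.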

In a case study on sepsis, a female patient was admitted to Johns Hopkins Medical Center for what was believed to be pneumonia, Saria said. Doctors administered the usual course of antibiotics and weren’t too concerned. Her condition worsened, and on the seventh day she demonstrated visible symptoms of septic shock and was transferred to the intensive care unit. Once in the ICU, her kidneys and lungs began to fail, and she passed away on day 22.

“The challenge is that sepsis is extremely hard to diagnose,” Saria said. A study conducted by Harvard University revealed that medical experts couldn’t agree on an early diagnosis of sepsis when presented with the condition’s symptoms. Late stages of sepsis, including septic shock, are easier to identify, but by that point the mortality rate has jumped dramatically.

To make early sepsis detection possible, Saria and her team created a framework called TREWS (pronounced “trues”), which stands for Targeted, Real-Time Early Warning System. TREWS is a machine learning system that uses deep learning to identify symptoms and support medical diagnoses.

“Deep learning has completely turbocharged modern AI,” Huang exclaimed. “This incredible algorithm that can automatically detect important features out of data, and from this algorithm it can construct hierarchically knowledge representations. And if you feed it more data, it will become more and more robust.”

The earlier the detection, the better.

Making a comparison to a modern smartphone, the Nvidia medical cloud essentially lets hospitals and medical providers upload existing data collected from patients, build models, and leverage the power of artificial intelligence. The result is that diseases can be detected earlier, pathologies can be modeled and made easier to understand, and scans become richer in detail and information.

The system “leverages high dimensional, noisy health system data to build things that are very patient specific,” explained Saria. “This brings precision into precision healthcare.” Essentially, TREWS takes a look at all the data and “moves from a reactive to a proactive prevention system.”

The challenge with deep learning, Huang explained, is that “it needs a ton of data and a ton of computers.”

In the case of identifying sepsis, Saria relies on historical data from past patients within a sequential learning framework. The end goal of TREWS is for the AI system to detect sepsis as early as possible and alert doctors, or prompt them to perform more tests to determine whether the patient does in fact have sepsis.
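A minimal sketch of what such a sequential early-warning loop might look like; the vital-sign features, hand-picked weights, and alert threshold below are all hypothetical stand-ins for the model TREWS actually learns from historical patient records:

```python
# Hypothetical sketch of a sequential early-warning loop in the spirit
# of TREWS. Feature names, weights, and the threshold are invented for
# illustration; the real system learns its model from historical data.

WEIGHTS = {"heart_rate": 0.02, "resp_rate": 0.05, "temp_c": 0.1, "lactate": 0.5}
BASELINE = {"heart_rate": 80.0, "resp_rate": 16.0, "temp_c": 37.0, "lactate": 1.0}
ALERT_THRESHOLD = 2.0  # score above which clinicians are alerted

def risk_score(vitals):
    """Weighted sum of deviations from baseline: a stand-in for a
    learned risk model over routinely collected sensor data."""
    return sum(w * abs(vitals[k] - BASELINE[k]) for k, w in WEIGHTS.items())

def monitor(stream):
    """Scan an hourly stream of vitals; return the first hour whose
    score crosses the alert threshold, or None if no hour does."""
    for hour, vitals in enumerate(stream):
        if risk_score(vitals) >= ALERT_THRESHOLD:
            return hour  # flag for clinician review and follow-up tests
    return None
```

In practice the scoring function would be a trained model rather than hand-tuned weights, and an alert would route to clinicians for confirmation rather than issuing a diagnosis on its own.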

In the case of the patient who succumbed to sepsis, the AI would have detected the condition 12 hours before doctors did. And beyond sepsis, AI could be used to detect all sorts of other medical conditions, such as hypertension and heart disease, Saria said.

Inferring More Information

To demonstrate some of the advancements that Nvidia’s hardware and software provide researchers, AI was used to infer how a patient’s left ventricle would look in 3D and to display data such as the heart’s ejection fraction.

In another example, Philips, a maker of ultrasound machines, took a black-and-white 2D ultrasound of a fetus and reconstructed it into a three-dimensional image with a more lifelike appearance. Additionally, using the GPU’s ray-tracing capabilities, the ultrasound scan was visualized as if there were a virtual light inside the uterus, complete with subsurface scattering for flesh tones.

Outside of Project Clara, Nvidia is also building the hardware needed to make some of these complex processes happen quickly. Medical imaging requires even more powerful hardware, and Huang claims that “the world wants a gigantic GPU.”

To facilitate early disease detection, more comprehensive scans, and broader use of deep learning and artificial intelligence, Nvidia introduced the DGX-2 supercomputer at the GPU Technology Conference.

The company states that the DGX-2 is up to ten times faster than the DGX-1, which was introduced a mere six months prior, and that the system can replace 300 dual-CPU servers valued at $3 million. Essentially, the DGX-2 costs an eighth as much, occupies a sixtieth of the space, and consumes only an eighteenth of the power of the equipment it replaces.

“Our strategy at Nvidia is to advance GPU computing for deep learning in AI at the speed of light,” Huang said.
