Nvidia’s new Tesla cards meet the growing demands of AI services

Image: Nvidia’s Tesla P40 and P4 deep neural network inferencing accelerators
Now that Nvidia has addressed the consumer market with its latest graphics cards based on the “Pascal” architecture, the next solutions in the company’s Pascal rollout address the deep neural network market, accelerating machine learning. These solutions arrive in the form of Nvidia’s new Tesla P4 and Tesla P40 accelerator cards, which speed up the inferencing production workloads carried out by services that use artificial intelligence.

There are essentially two types of accelerator cards for deep neural networks: training and inference. The former should speak for itself, accelerating the training of a deep neural network before it’s deployed in the field. Inference, however, is the process of feeding an input to a trained deep neural network and having it produce a result based on that input, such as translating speech in real time or locating faces in images.
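
To make the distinction concrete, here’s a minimal sketch in Python with NumPy (not Nvidia’s software stack, just an illustration with made-up data) showing the two phases: a training loop that repeatedly adjusts a tiny model’s weights, followed by a single inference pass on a new input.

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 4))          # made-up training inputs
y = (X.sum(axis=1) > 0).astype(float)      # made-up labels
w, b = np.zeros(4), 0.0                    # model parameters

# Training: many passes over the data, nudging the weights each time
# (the phase that cards like the Tesla P100 accelerate).
for _ in range(500):
    pred = 1 / (1 + np.exp(-(X @ w + b)))  # forward pass (sigmoid)
    grad = pred - y                        # gradient of the log loss
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

# Inference: one cheap forward pass on a fresh input
# (the phase the new P4 and P40 accelerate).
new_input = rng.standard_normal(4)
score = 1 / (1 + np.exp(-(new_input @ w + b)))
print("positive class?", bool(score > 0.5))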

According to Nvidia, the new Tesla P4 and Tesla P40 accelerator cards are designed for inferencing and include specialized inference instructions based on 8-bit operations, delivering response times up to 45 times faster than an Intel Xeon E5-2690v4 processor. They also provide a 4x improvement over the company’s previous-generation “Maxwell” Tesla cards, the M40 and M4.
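
As a rough illustration of why 8-bit operations help, here’s a hedged NumPy sketch of the basic idea behind INT8 inference: store the weights as 8-bit integers with a scaling factor, then recover an approximation of the full-precision result at run time. The per-tensor scaling shown is a common textbook scheme, not necessarily the calibration Nvidia’s tools perform.

import numpy as np

rng = np.random.default_rng(1)
w_fp32 = rng.standard_normal((256, 256)).astype(np.float32)  # made-up weights
x = rng.standard_normal(256).astype(np.float32)              # one input vector

# Quantize: map the weight range onto signed 8-bit integers with one scale.
scale = np.abs(w_fp32).max() / 127.0
w_int8 = np.round(w_fp32 / scale).astype(np.int8)  # 4x smaller than fp32

# The dequantized int8 weights approximate the full-precision result while
# cutting memory traffic; dedicated 8-bit instructions also speed up the
# arithmetic itself on hardware like the P4 and P40.
y_fp32 = w_fp32 @ x
y_int8 = (w_int8.astype(np.float32) * scale) @ x
print("max abs error:", np.abs(y_fp32 - y_int8).max())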

The company said this week during its GTC Beijing 2016 conference that the Tesla P4 sports a small form factor that’s ideal for data centers. It’s 40 times more energy efficient than CPUs used for inferencing, and a single server with a Tesla P4 can replace 13 CPU-only servers built for video inferencing workloads. Meanwhile, the Tesla P40 is aimed at heavier deep learning workloads, with a server containing eight of these accelerators able to replace more than 140 CPU-based servers.

Compared to the previous Tesla M40, the new P40 packs more CUDA cores, higher clock speeds, a faster memory clock, greater single-precision performance at 12 teraflops, and more transistors at 12 billion. The power requirement (thermal envelope) stays the same, however, meaning Nvidia has boosted performance per watt without forcing the card to draw more power. The same holds true for the slower Tesla P4 when compared to the older Tesla M4.

“With the Tesla P100 and now Tesla P4 and P40, NVIDIA offers the only end-to-end deep learning platform for the data center, unlocking the enormous power of AI for a broad range of industries,” said Ian Buck, general manager of accelerated computing at Nvidia. “They slash training time from days to hours. They enable insight to be extracted instantly. And they produce real-time responses for consumers from AI-powered services.”

Nvidia revealed the Tesla P100 during its GTC 2016 conference five months ago. That card is aimed at accelerating neural network training, delivering a performance increase of more than 12 times over the previous-generation Maxwell-based solution. Again, neural networks must be trained before they’re deployed in the field, and the Tesla P100 speeds up that process, cutting AI training down from weeks to days.

In addition to the two new Tesla cards, Nvidia launched TensorRT, a library for “optimizing deep learning models for production deployment,” and introduced the Nvidia DeepStream SDK for simultaneously decoding and analyzing up to 93 HD video streams. Here’s a brief rundown of the hardware details for Nvidia’s two new Tesla cards, which are now available:

                    Tesla P40       Tesla P4
GPU                 GP102           GP104
CUDA Cores          3,840           2,560
Base Clock          1,303MHz        810MHz
Boost Clock         1,531MHz        1,063MHz
GDDR5 Memory Clock  7.2Gbps         6Gbps
Memory Bus Width    384-bit         256-bit
GDDR5 Amount        24GB            8GB
Single Precision    12 TFLOPS       5.5 TFLOPS
TDP                 250 watts       50 to 75 watts
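
As a quick sanity check on the efficiency claims above, the table’s own figures can be turned into performance-per-watt numbers with a few lines of Python (the 75-watt value is the P4’s upper bound):

# (single-precision TFLOPS, TDP in watts) taken straight from the table above
cards = {"Tesla P40": (12.0, 250.0), "Tesla P4": (5.5, 75.0)}
for name, (tflops, watts) in cards.items():
    print(f"{name}: {tflops / watts * 1000:.0f} GFLOPS per watt")

That works out to roughly 48 GFLOPS per watt for the P40 and 73 for the P4, which helps explain why the smaller card targets power-constrained inferencing servers.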