AMD flashes a mug-sized, cube-shaped device packing four 'Vega' graphics chips

In addition to showcasing the power of its upcoming graphics cards based on its new “Vega” chip design — by running Doom at a 4K resolution with Ultra settings and a 60Hz refresh rate — AMD also revealed a coffee mug-sized, cube-shaped device packed with four Vega-based prototype boards. It’s currently dubbed the Vega Cube or Koduri Cube, although there’s no official name for it just yet. Whatever it ends up being called, the device is slated to provide 100 teraflops of half-precision computing power for deep learning and machine learning scenarios.

The device was shown during AMD’s recent Tech Summit 2016 conference, but the reveal wasn’t an official product announcement, so there’s no indication of when it will be made available to the deep learning market. Moreover, the model shown on stage didn’t even contain actual Vega-based GPUs. Raja Koduri, head of AMD’s Radeon Technologies Group, said the solution uses a special interface that the company isn’t revealing quite yet.

There’s speculation that the Vega Cube serves as AMD’s response to Nvidia’s NVLink technology. NVLink is a communications protocol developed by Nvidia that establishes a direct connection between the company’s graphics chips and a CPU, as well as between multiple Nvidia-made GPUs. It’s meant to provide faster communication lanes than standard PCI Express, pushing 5 to 12 times more bandwidth. The tech is built into Nvidia’s latest graphics chips based on its “Pascal” design, and is aimed at the high-performance computing market.

For instance, one graphics card that takes advantage of NVLink is Nvidia’s Tesla P100 for the data center. It’s meant for parallel computing, meaning it works alongside the system’s processor to help handle computing loads. When installed in servers optimized with Nvidia’s NVLink technology, the card provides 10.6TFLOPs of single-precision performance, 21.2TFLOPs of half-precision performance, and NVLink communication speeds of up to 160GB per second.
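
To put that 160GB-per-second figure in perspective, here’s a rough back-of-envelope sketch in Python. The NVLink number comes from the figures above; the PCI Express 3.0 x16 bandwidth of roughly 32GB per second (bidirectional) is an assumption based on the standard’s commonly quoted spec, and the comparison lands at the low end of the 5-to-12-times range mentioned earlier.

```python
# Back-of-envelope comparison of the interconnect bandwidth figures above.
# NVLink figure: 160GB/s aggregate, as reported for the Tesla P100.
# PCIe 3.0 x16 figure: ~32GB/s bidirectional (assumed from the standard's spec).

nvlink_gbps = 160
pcie3_x16_gbps = 32

speedup = nvlink_gbps / pcie3_x16_gbps
print(f"NVLink vs. PCIe 3.0 x16: roughly {speedup:.0f}x the bandwidth")
# Prints: NVLink vs. PCIe 3.0 x16: roughly 5x the bandwidth
```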

As for the Vega Cube, each of the four Vega-based GPUs will provide 25TFLOPs of half-precision computing performance. This cube-shaped solution would presumably be installed vertically, unlike Nvidia’s Tesla P100, which is installed horizontally. This is mostly speculation for now, given that the device is making headlines through reports from the show rather than through AMD itself.
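
For a rough sense of scale, here’s a minimal sketch using only the half-precision figures reported above: four GPUs at 25TFLOPs each account for the 100-teraflop claim, and the total works out to several times a single Tesla P100.

```python
# Rough arithmetic using the half-precision (FP16) figures quoted above.

vega_gpus_in_cube = 4
fp16_tflops_per_vega = 25        # per-GPU figure reported for the Vega Cube
tesla_p100_fp16_tflops = 21.2    # figure reported for Nvidia's Tesla P100

cube_total = vega_gpus_in_cube * fp16_tflops_per_vega
print(f"Vega Cube aggregate FP16 throughput: {cube_total} TFLOPs")                  # 100 TFLOPs
print(f"That's roughly {cube_total / tesla_p100_fp16_tflops:.1f}x one Tesla P100")  # ~4.7x
```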

Reports of the cube-shaped computing device arrive alongside the introduction of Radeon Instinct, a passively cooled accelerator card lineup optimized for machine learning, deep learning frameworks, and their related applications. There’s a full-sized MI6 model packing 5.7TFLOPs of performance, memory bandwidth of 224GB per second, and a power requirement of around 150 watts. The MI8 model is a small form factor card with 8.2TFLOPs of performance, memory bandwidth of 512GB per second, and a power draw of around 175 watts.

AMD also introduced the MI25 model based on its new Vega graphics chip design. This is a high-performance accelerator card built specifically for training artificial intelligence. The company didn’t provide full specs, but the general summary lists a power draw of around 300 watts, “2x packed math,” and a high bandwidth cache and controller.

AMD is scheduled to host an event on Tuesday showcasing the performance of its upcoming Zen-based Summit Ridge desktop processors. The solutions mentioned here likely won’t be part of the show, but there’s a strong indication that the company will reveal a Vega-based graphics card for the high-end PC gaming market. AMD, it seems, will have a very busy 2017.

Kevin Parrish
Former Digital Trends Contributor
Kevin started taking PCs apart in the 90s when Quake was on the way and his PC lacked the required components. Since then…