
Samsung is about to go all-in on GPUs

3nm chip fabrication at Samsung. (Image: Samsung)

Samsung is making a bold move into the GPU market, signaling a shift in its business strategy. The initiative comes after the company reportedly received the green light for a massive investment plan aimed at expanding its GPU and AI infrastructure. While Samsung is best known for its advancements in memory and storage, the new direction points to a strategic pivot toward developing cutting-edge GPU technologies.

No, that doesn’t mean it’s going to be developing consumer GPUs. Samsung isn’t expected to create GPUs for PCs that rival Nvidia and AMD anytime soon. Instead, Samsung’s approach will reportedly focus on bolstering its AI capabilities, leveraging GPUs to enhance AI-driven applications.


The investment will reportedly focus on creating GPUs tailored for AI workloads, with applications in areas such as digital twins and lithography. Digital twins, which are virtual replicas of physical systems, require immense computational power that specialized GPUs can provide — something Nvidia has talked a lot about in the past. In lithography, GPUs can accelerate the complex computations needed for semiconductor manufacturing, increasing efficiency and precision in a process Samsung is already heavily involved in.


According to a separate report by Bloomberg, Nvidia CEO Jensen Huang said during a Computex briefing that his company is evaluating HBM (high bandwidth memory) from both Samsung and Micron Technology for future use. He noted that while Samsung's HBM hasn't failed any qualification tests, it still requires additional engineering work. Addressing earlier reports of overheating and power consumption problems with Samsung's HBM, Huang dismissed those concerns, saying there were no significant issues. He added, however, that he wished the remaining engineering work had been completed sooner.

Bloomberg also reported recently on Samsung’s claims about “breakthrough” technology around its next-gen HBM, which will use 3D, or “stacking,” memory to improve efficiency.

Additionally, Samsung’s efforts in GPU development are said to complement its ongoing work with the Exynos series of processors. Exynos, primarily known for powering Samsung’s smartphones and tablets, has faced competitive pressure from other mobile SoCs. By integrating advanced GPU capabilities into Exynos chips for mobile devices, Samsung could potentially boost performance and efficiency, providing a more compelling offering in the mobile market.

All in all, it seems like a bit of a no-brainer for Samsung to get into the GPU game, even if it doesn’t mean we’ll be seeing Samsung GPUs in PCs anytime soon.
