Samsung Electronics is expected to effectively become the exclusive supplier of sixth-generation high-bandwidth memory (HBM4) for the top-tier model of Vera Rubin, Nvidia's next-generation graphics processing unit (GPU) for artificial intelligence (AI), which will be rolled out in earnest in the second half of this year.
Nvidia has been working behind the scenes with Samsung Electronics to raise the performance of its ultra-high-performance model, the Vera Rubin NVL72. Because Samsung Electronics' HBM4, built on a more advanced process than its rivals', succeeded in pushing maximum performance higher, the company is assessed as having effectively secured an edge in the high-end lineup.
According to industry sources on the 20th, Nvidia plans to split its HBM4 suppliers for the next-generation GPU into a general lineup that prioritizes "stability" and a performance lineup aimed at AI infrastructure that demands top performance. While the general models will account for the bulk of total supply, the high-end models are expected to be far more profitable, with estimated prices running two to three times those of the previous generation.
Samsung Electronics targeted the highest-end lineup, which demands top performance, from the early stages of HBM4 development, and it is said to have differentiated itself from competitors as its mass-produced HBM4 surpassed Nvidia's requirements. Unlike SK hynix and Micron, which chose fifth-generation 10-nanometer-class (1b) DRAM, Samsung Electronics preemptively adopted sixth-generation 10-nanometer-class (1c) DRAM. As DRAM fabrication processes advance, not only chip productivity but also intrinsic performance and power efficiency improve, which in turn affects the maximum performance of HBM4.
Because HBM is built by stacking high-performance DRAM, it is by nature heavily influenced by the performance of the DRAM itself; the DRAM's power efficiency and operating speed are the biggest variables determining HBM's maximum performance. The finer the DRAM line width, the higher the density, power efficiency, and productivity. Power efficiency of 1c-process DRAM is known to be 10%–20% higher, and operating speed more than 10% higher, than the previous generation. The biggest factor behind Samsung Electronics' HBM4 reaching a maximum speed of 13 Gbps, more than 40% above the international standard, is its adoption of the 1c process.
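For reference, a minimal back-of-the-envelope sketch of how those per-pin figures translate into a speed margin and stack bandwidth; it assumes the JEDEC HBM4 baseline of 8 Gb/s per pin and a 2,048-bit interface per stack, neither of which is stated in the article:

```python
# Rough check of the per-pin speed figures cited above.
# Assumptions (not from the article): JEDEC HBM4 baseline of 8 Gb/s per pin
# and a 2048-bit interface per stack are used as reference values.

JEDEC_HBM4_GBPS_PER_PIN = 8.0     # assumed baseline per-pin data rate (Gb/s)
SAMSUNG_HBM4_GBPS_PER_PIN = 13.0  # maximum speed cited in the article (Gb/s)
IO_PINS_PER_STACK = 2048          # assumed HBM4 interface width (bits)

# Margin over the assumed baseline: (13 / 8 - 1) * 100 = 62.5%, i.e. "more than 40%".
margin_pct = (SAMSUNG_HBM4_GBPS_PER_PIN / JEDEC_HBM4_GBPS_PER_PIN - 1) * 100

# Rough per-stack bandwidth: pins * (Gb/s per pin) / 8 bits per byte -> GB/s.
bandwidth_gb_s = IO_PINS_PER_STACK * SAMSUNG_HBM4_GBPS_PER_PIN / 8

print(f"Speed margin over assumed baseline: {margin_pct:.1f}%")   # ~62.5%
print(f"Approx. per-stack bandwidth: {bandwidth_gb_s:.0f} GB/s")  # ~3328 GB/s (~3.3 TB/s)
```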
However, the share of Nvidia's high-end GPUs within the overall AI semiconductor market remains uncertain, since it will depend on the direction of capital expenditures by big tech companies such as OpenAI, Google, Meta, Microsoft (MS), and Amazon. If big tech maintains a conservative investment stance, general models, rather than high-end GPUs, could account for the majority of demand.
SK hynix, for its part, appears to be focusing more on supply stability than on peak performance. Its commanding position in the HBM market works in its favor, and its partnership with Nvidia remains solid. SK hynix's 1b DRAM production capacity is currently known to be the most stable among suppliers, so its market share is expected to be relatively high in the HBM4 market as well.
That said, time is widely seen as being on Samsung Electronics' side. As the HBM market gradually levels up and the competition moves into a heavier weight class, Samsung Electronics' production capacity and market influence are expected to stand out. A source familiar with Samsung said, "The most important variables for next-generation HBM are the stability and performance of the underlying DRAM, and if Samsung Electronics' 1c DRAM production yield reaches a mature level, the market will be shaken up once again."