Samsung Electronics has launched a counteroffensive in the high-bandwidth memory (HBM) race, seen as a "game changer" in the artificial intelligence (AI) chip market.

Samsung Electronics said on the 12th that it has begun mass production and shipments of the world's first sixth-generation HBM (HBM4). While SK hynix has led the HBM market by monopolizing Nvidia orders, industry watchers say Samsung Electronics' HBM4 could create a new inflection point in the competition by leading with performance.


◇ Samsung chose performance, SK hynix chose stability… the tide has turned

At the heart of this showdown is a process choice that diverged early in development. Samsung Electronics applied 10-nanometer-class sixth-generation (1c) DRAM, one generation ahead of SK hynix, and paired it with its in-house foundry's 4-nanometer (nm) logic process for the logic die that serves as the brain of the HBM stack. The product's differentiator is design-technology co-optimization (DTCO), which makes the memory and logic work as if they were one. In contrast, SK hynix adopted proven 10-nanometer-class fifth-generation (1b) DRAM and used TSMC's 12 nm process for the logic die, a strategy that prioritizes production yield and process stability.

Analysts say this strategic divergence opened a gap in data rate, the core performance metric of HBM4. In the industry, the 11.7 Gbps speed SK hynix has achieved with HBM4 is seen as nearing the limit of the existing packaging architecture. Samsung Electronics, on the other hand, secured headroom for speeds of up to 13 Gbps.

A 1.3 Gbps difference in data processing speed is a decisive variable in AI computing environments. When the per-pin speed rises from 11.7 Gbps to 13 Gbps, total bandwidth per stack soars from about 2.6 TB/s to as high as 3.3 TB/s. A performance gap of this size can dramatically reduce data bottlenecks when training ultra-large AI models, cutting training runs that take weeks by tens of percent. Indeed, with Nvidia recently raising the required spec for its next-generation platform to 13 Gbps, Samsung's "performance-first strategy" appears poised to move beyond a showcase and become the market standard.
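For reference, per-stack bandwidth follows directly from the per-pin data rate and the interface width. The sketch below is a simplified back-of-the-envelope calculation, assuming the 2048-bit per-stack I/O width of the JEDEC HBM4 specification; the function name is illustrative, and real delivered throughput also depends on the memory controller and workload.

```python
# Simplified peak-bandwidth estimate for one HBM4 stack.
# Assumption: 2048-bit interface per stack (JEDEC HBM4); refresh and
# controller overhead are ignored, so this is a theoretical peak.

def hbm_stack_bandwidth_tb_s(pin_rate_gbps: float, interface_width_bits: int = 2048) -> float:
    """Peak bandwidth in TB/s: per-pin rate x interface width, converted from Gbit/s."""
    return pin_rate_gbps * interface_width_bits / 8 / 1000  # Gbit/s -> GB/s -> TB/s

print(f"13.0 Gbps per pin -> {hbm_stack_bandwidth_tb_s(13.0):.2f} TB/s per stack")
# 13.0 Gbps per pin -> 3.33 TB/s per stack
```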

An industry official said, "By making an 'aggressive bet' on cutting-edge processes to make up for its underperformance with HBM3E (fifth-generation HBM), Samsung broke through the performance ceiling," adding, "In particular, unlike the TSMC–SK hynix alliance, Samsung's 'integrated supply chain,' which shortens process lead times and achieves optimization within a single process flow, is likely to become a powerful weapon in the era of AI semiconductor customization."

An image of Samsung Electronics' sixth-generation high-bandwidth memory (HBM4) product./Courtesy of Samsung Electronics

◇ "The key is securing profitability through stabilizing Production yield"

With Samsung Electronics beginning shipments to Nvidia, analysts say the key is securing profitability by improving production yield. The 1b DRAM process has matured considerably, since commodity DRAM and HBM3E built on it are already shipping, whereas 1c DRAM is effectively being commercialized for the first time through Samsung Electronics' HBM4, leading some to suggest that poor yield could make profitability hard to secure. Because HBM4 stacks 12 DRAM dies, if per-die yield falls below 90%, overall stack yield drops sharply, inevitably eroding profitability.
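To see why per-die yield compounds so harshly, here is a deliberately simplified model, not Samsung's actual yield math: it assumes all 12 stacked DRAM dies must be defect-free and that defects are independent, ignoring known-good-die screening and die repair, which raise real-world yields.

```python
# Minimal stack-yield model: every die in the stack must be good, defects independent.
# Real HBM lines use known-good-die testing and repair, so this is a worst-case sketch.

def stack_yield(per_die_yield: float, dies_per_stack: int = 12) -> float:
    """Probability that all dies in one stack are good."""
    return per_die_yield ** dies_per_stack

for y in (0.95, 0.90, 0.85):
    print(f"per-die yield {y:.0%} -> stack yield {stack_yield(y):.1%}")
# per-die yield 95% -> stack yield 54.0%
# per-die yield 90% -> stack yield 28.2%
# per-die yield 85% -> stack yield 14.2%
```

Under these assumptions, even a 90% per-die yield leaves fewer than three in ten stacks fully good, which is why process stabilization dominates the profitability question.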

A semiconductor industry official said, "Samsung Electronics applied advanced processes to gain the upper hand in performance, with the top priority of entering Nvidia's supply chain," adding, "Even if a product has a performance edge, poor production yield on an unstabilized process could worsen profitability, and that is a risk."

Aiming to seize the lead in the HBM market, Samsung Electronics expects demand for "custom HBM," tailored to each customer's requirements, to take off around 2027. Leveraging its strength of owning both a foundry and a memory business, it is preparing to offer, as a single package, HBM together with custom logic dies optimized for each customer's ASIC (application-specific integrated circuit) design. The plan is to go beyond a simple parts supplier and cement its status as a "super contractor" design partner that maps out the architecture from the AI semiconductor design stage.

※ This article has been translated by AI.