A visitor looks over HBM4 exhibits at the Samsung Electronics booth at last year's SEDEX 2025. /Courtesy of News1

As Nvidia sharply raised its specifications for sixth-generation high bandwidth memory (HBM4), the inflection point of this year's artificial intelligence (AI) memory market, Samsung Electronics and SK hynix once again find themselves reworking their next-generation HBM to meet the tougher requirements. Experts say the importance of the logic die (base die) will grow starting with HBM4. The logic die, the brain of HBM, manages the traffic of power and signals, and Samsung Electronics may hold a slight structural advantage here: it has the technology in-house, while SK hynix relies on Taiwan's TSMC.

In TSMC's case in particular, domestic and foreign experts broadly agree that the company is struggling to absorb the foundry (contract chip manufacturing) demand surging with the Nvidia-led AI semiconductor boom, making it hard not only to expand production capacity for HBM logic dies but also to invest in process migration. This is cited as a weakness for SK hynix, which depends on TSMC for logic die production and processes.

According to industry sources on the 15th, Nvidia raised its HBM4 supply criteria again in the fourth quarter of last year and conveyed them to Samsung Electronics and SK hynix. In response, Samsung Electronics is revising its logic die design and, in collaboration with its foundry division, accelerating development with a focus on heat control and performance improvement.

HBM is, in essence, a structure in which multiple DRAM dies are stacked vertically. In HBM4, the logic die is the chip attached at the very bottom of this stack, handling functions that are difficult to execute with DRAM dies alone; the timing and routing of data transmission, as well as power management, depend on its performance.

HBM4's initial standard specification called for a per-pin data transfer speed of 8 to 10 Gb/s, but when Nvidia raised this to 11 Gb/s or higher, the importance of the logic die grew. At such speeds, the heat and power fluctuations that HBM generates can immediately translate into data processing errors, so the specifications of the logic die that controls them must be improved as well. In other words, the HBM performance race has expanded from DRAM itself to logic die design capability.
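The jump from 8 to 11 Gb/s per pin is larger than it may sound once it is multiplied across the full memory interface. As a rough illustration only (the 2,048-bit per-stack interface width comes from the JEDEC HBM4 standard, not from this article), peak per-stack bandwidth scales directly with the per-pin rate:

```python
# Illustrative back-of-the-envelope calculation, not figures from the article.
# Peak bandwidth per HBM4 stack = per-pin data rate x interface width.
# Interface width (2,048 bits per stack) is taken from the JEDEC HBM4 standard.

INTERFACE_WIDTH_BITS = 2048  # HBM4 I/O width per stack

def peak_bandwidth_tbs(per_pin_gbps: float) -> float:
    """Peak per-stack bandwidth in terabytes per second."""
    return per_pin_gbps * INTERFACE_WIDTH_BITS / 8 / 1000  # Gb -> GB -> TB

print(f"at  8 Gb/s per pin: {peak_bandwidth_tbs(8):.3f} TB/s")   # ~2.0 TB/s
print(f"at 11 Gb/s per pin: {peak_bandwidth_tbs(11):.3f} TB/s")  # ~2.8 TB/s
```

On this simple model, Nvidia's revised target lifts peak per-stack throughput by roughly 38 percent, which is why the faster signaling puts so much more stress on the logic die's signal and power control.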

A source familiar with Samsung Electronics said, "Raising data transfer speeds can be done by partially modifying the logic die's design and process, so we don't see it as that difficult," adding, "Reaching the 11 Gb/s per pin level Nvidia has presented is possible; the problem is heat control, and that, too, won't take long." Earlier, Jun Young-hyun, the vice chairman who heads Samsung Electronics' DS division, also expressed confidence in HBM4's competitiveness in his New Year's address this year.

Experts expect SK hynix to face some difficulty in this transition. A professor at a private university in Korea said, "As HBM has advanced through generations, technological differences have begun to emerge at the logic die, and SK hynix relies to a considerable extent on Taiwan's TSMC for that technology," adding, "For TSMC, which already struggles to meet demand, facility investment and line expansion for logic dies are pushed down the priority list behind its most advanced process nodes."

※ This article has been translated by AI.