A signal hinting at where high bandwidth memory (HBM) is heading has emerged in the artificial intelligence (AI) semiconductor market. According to JEDEC, the semiconductor standards body, on the 29th, discussions are ramping up on the next-generation specification, SPHBM4 (Standard Pseudo HBM4). The message of this standard is clear: the HBM race is shifting from simply pushing performance higher to a phase that also weighs the packaging and supply structures needed to support it.

A visitor at COEX in Gangnam-gu, Seoul examines SK hynix's HBM4 on display at the 27th Semiconductor Exhibition (SEDEX 2025)./Courtesy of News1

HBM has established itself as the core memory that determines AI accelerator performance. As compute grows, how quickly and reliably data can be supplied has become more important. Thus far, HBM has advanced by widening data paths, expanding bandwidth by increasing the number of signal lines between the graphics processing unit (GPU) and memory. With HBM4, the sixth-generation HBM, the number of data input/output lines (I/Os) reaches 2,048.
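The bandwidth gain from widening the interface can be sketched with a simple calculation. The 2,048 I/O count for HBM4 is from the article; the HBM3 baseline figures and the per-pin data rates below are illustrative assumptions, not figures the article cites:

```python
def peak_bandwidth_gb_s(io_count: int, pin_rate_gbit_s: float) -> float:
    """Peak bandwidth in GB/s: total bit rate across all I/Os, divided by 8 bits/byte."""
    return io_count * pin_rate_gbit_s / 8

# HBM3 baseline (assumed: 1,024 I/Os at 6.4 Gb/s per pin)
hbm3 = peak_bandwidth_gb_s(io_count=1024, pin_rate_gbit_s=6.4)

# HBM4 (2,048 I/Os per the article; 8 Gb/s per pin is an assumed rate)
hbm4 = peak_bandwidth_gb_s(io_count=2048, pin_rate_gbit_s=8.0)

print(f"HBM3: ~{hbm3:.0f} GB/s per stack")  # ~819 GB/s
print(f"HBM4: ~{hbm4:.0f} GB/s per stack")  # ~2048 GB/s
```

Doubling the I/O count alone doubles peak bandwidth at a given pin rate, which is why each generation's wider interface also drives up the wiring density, and thus the packaging difficulty, discussed below.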

The problem was the next step. As the number of signal lines increased, packaging difficulty rose in tandem. Ultra-fine wiring and precise timing control became necessary, and dependence on silicon interposer-based advanced packaging also grew. At times, the burden of packaging HBM into an actual product outweighed that of further boosting memory performance.

In this context, TSMC's CoWoS (Chip-on-Wafer-on-Substrate) process is often cited. CoWoS is an advanced packaging technology essential for high-performance AI accelerators, but as demand for AI semiconductors surged, supply capacity has become tight. As a result, even if memory semiconductor companies hold the initiative in the HBM technology race, actual shipment volumes and schedules have continued to be affected by packaging conditions.

SPHBM4 is a derivative specification that reflects this reality. Rather than pushing performance further, it takes an approach of implementing HBM with lower packaging burden than before. While using the same DRAM as conventional HBM4, it proposes a structure that lowers packaging difficulty by changing how data signals are handled. The goal is to partially ease reliance on complex advanced packaging to reduce expense and design constraints.

The key point is that SPHBM4 is not a "low-cost HBM" or a "downgraded alternative." The memory core die and stack structure are the same as conventional HBM4. What changed is not the memory's intrinsic performance, but the physical method of implementing HBM in a system. It effectively maintains the performance race while offering another option that accounts for packaging realities.

This shift could have significant ripple effects across the industry. There is potential for HBM to expand beyond being a high-priced, AI accelerator-only memory to applications such as server central processing units (CPUs), network chips, and cloud application-specific integrated circuits (ASICs). In other words, the HBM-related market itself could broaden.

This is also a notable point for Korea's memory companies. Because SPHBM4 uses the same DRAM dies as existing HBM, the three memory suppliers (SK hynix, Samsung Electronics, and Micron) have room to anticipate additional demand while maintaining their premium HBM technology competitiveness. In particular, if packaging constraints are partially eased, the ability to supply volumes more stably could become a competitive edge.

Of course, it is difficult to conclude that SPHBM4 will immediately become mainstream in the market. The standard is still in the finalization stage, and actual adoption depends on customers and the ecosystem. What is clear, however, is that the HBM race is moving beyond a simple contest of performance figures to a stage that also considers packaging realities and market scalability. SPHBM4 can be read as a standard proposal that symbolically illustrates that shift.

An industry official said, "The focus of the HBM race is shifting from simple speed or spec battles to how stably companies can mass-produce and supply," and added, "SPHBM4 is a case that, within this trend, presents a new balance point where memory companies can compete not only on technology but also on production capacity."
