Samsung Electronics will unveil a physical chip of its seventh-generation High Bandwidth Memory (HBM), HBM4E, for the first time at Nvidia's GTC 2026 developer conference in the United States. The move, showcasing next-generation HBM technology ahead of rivals at the world's largest artificial intelligence (AI) ecosystem event, is seen as an attempt to expand Samsung's presence in Nvidia's next AI chip supply chain.
Samsung Electronics said it will participate in Nvidia GTC 2026, held in San Jose, California, from the 16th to the 19th (local time), and unveil its HBM4E chip and core-die wafer. The exhibition will also feature its HBM4 product, which in February became the industry's first to reach mass production and shipment.
HBM4E is the successor to HBM4, significantly boosting performance over its predecessor. Samsung Electronics is developing HBM4E using its 1c DRAM process and a 4-nanometer base die, supporting up to 16 gigabits per second (Gbps) per pin and up to 4 TB/s of bandwidth.
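The quoted figures are internally consistent if one assumes the 2048-bit stack interface defined for HBM4-class memory (an assumption not stated in the article). A quick back-of-the-envelope check:

```python
# Sanity check of the HBM4E bandwidth figure.
# Assumption (not from the article): an HBM4-class stack exposes a
# 2048-bit-wide data interface, per the JEDEC HBM4 standard.
PINS = 2048          # data pins (bits) per stack, assumed
GBPS_PER_PIN = 16    # per-pin speed quoted in the article, Gbit/s

total_gbit_per_s = PINS * GBPS_PER_PIN            # 32,768 Gbit/s
total_tbyte_per_s = total_gbit_per_s / 8 / 1024   # Gbit/s -> TB/s

print(total_tbyte_per_s)  # 4.0, matching the quoted 4 TB/s
```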
In the current AI HBM market, SK hynix is widely seen as leading Nvidia's supply chain on the strength of its HBM3E products. At this GTC, however, Samsung Electronics is preemptively unveiling HBM4E, a next-generation technology, highlighting a faster development pace than its competitors.
The generational shift in the AI semiconductor market is also gaining momentum with the emergence of HBM4. Samsung Electronics last month became the first in the world to mass-produce and ship HBM4, launching a counteroffensive in the next-generation HBM race. HBM4, which delivers significantly improved speed and power efficiency over HBM3E, is expected to be a key memory in next-generation AI GPUs.
SK hynix is also preparing to switch to HBM4, but Samsung Electronics appears to be closing the technology gap quickly by moving up its mass-production timeline. Micron is targeting the AI memory market with HBM3E products but is seen as somewhat behind the Korean companies in its HBM4 development schedule.
Notably, competition in AI semiconductors is expanding from GPU-centric rivalry to the system level, encompassing memory and packaging. In particular, Nvidia's next-generation AI platform "Vera Rubin" integrates GPU, CPU, memory, and storage into a single system, and suppliers adopted for the platform are likely to secure stable demand in the next-generation AI data center market. Analysts say Samsung Electronics, which possesses memory, foundry, and packaging capabilities simultaneously, can secure competitiveness amid this shift.
At the event, Samsung Electronics will showcase its semiconductor development capabilities combining memory, foundry, and packaging technologies through the "HBM4 Hero Wall" exhibit. It will also display memory solutions that make up Nvidia's AI platform, including ▲ HBM4 for Rubin GPUs ▲ SOCAMM2 for Vera CPUs ▲ PM1763 SSDs for servers.
On the second day of the event, Song Yong-ho, head of the Samsung Electronics AI Center, will take the stage as a presenter by special invitation from Nvidia. Song is expected to outline Samsung's memory strategy supporting Nvidia's next-generation systems and the direction of its AI infrastructure technologies.
A semiconductor industry official said, "SK hynix was ahead in the existing HBM3E market, but as Samsung Electronics followed its world-first mass production of HBM4 by unveiling HBM4E, the next-generation competition is heating up," adding, "As competition that had centered on GPUs expands into system-level rivalry that includes memory and packaging, Samsung's competitiveness is coming to the fore."