Samsung Electronics plans to more than triple its output this year of high-bandwidth memory (HBM) for artificial intelligence (AI) data centers compared with last year.
Vice President Hwang Sang-jun, head of memory development at Samsung Electronics, met with reporters on Mar. 16 local time at Nvidia's annual developer conference, GTC 2026, in San Jose, California, and said, "We are ramping up sharply, and there are no major problems with production."
He added, "Our goal is for HBM4 to account for more than half of total HBM," noting that "if supply is a bit tight, concentrating supply on premium products is better for the industry as a whole."
On the global memory shortage, Hwang said strategic supply allocation is unavoidable: volumes will have to be apportioned by distinguishing strategic partners from ordinary mass-production customers.
The company also unveiled its next-generation process roadmap. While the base dies of the sixth-generation HBM4, now in mass production, and its follow-on, the seventh-generation HBM4E, use the same 4-nanometer process, starting with HBM5 and HBM5E Samsung plans to move to Samsung Foundry's 2-nanometer process.
The company said the stacked core dies used in HBM5 and HBM5E will be built on 10-nanometer-class 1c (sixth-generation) and 1d (seventh-generation) DRAM processes, adding, "There is a cost burden, but to match the products and concepts HBM aims for, using leading-edge processes is unavoidable."
Samsung Electronics is pursuing a strategy to release new HBM products annually in step with key customer Nvidia's AI chip launch cycle.
The "Groq 3" inference-only chip, for which Nvidia CEO Jensen Huang said he was "thankful to Samsung," is manufactured at Samsung's Pyeongtaek campus.
Hwang said the goal is to begin mass production of the chip between the end of the third quarter and the beginning of the fourth quarter of this year, adding that more orders than expected have already been secured.
He also emphasized that Groq had been a Samsung Foundry customer even before it signed a licensing agreement with Nvidia, and that after the agreement the decision was made to keep the existing production setup based on satisfaction with the product.
Groq 3 is a large die with an area exceeding 700 square millimeters, yielding only about 64 chips per wafer, far fewer than the 400 to 600 chips a wafer typically produces with smaller dies.
Instead, 70% to 80% of the chip is composed of SRAM to reduce reliance on external HBM and to enable fast inference on the chip itself.
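The chips-per-wafer figures above can be sanity-checked with the standard die-per-wafer approximation. This is a rough sketch, not from the article: the formula, the 300-millimeter wafer assumption, and the 120-square-millimeter comparison die are all illustrative, and the gross count it produces is an upper bound before yield losses, which is consistent with the article's figure of about 64 usable chips.

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """Gross dies per wafer: wafer area divided by die area, minus an
    edge-loss term for partial dies along the rim. Defects and scribe
    lines reduce the usable count below this estimate."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

print(dies_per_wafer(700))   # >700 mm^2 inference die: on the order of tens of gross dies
print(dies_per_wafer(120))   # hypothetical smaller die: in the typical 400-600 range
```

The edge-loss term matters more as the die grows: for a 700-square-millimeter die it removes roughly a quarter of the naive area-ratio count.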