NVIDIA logo./Yonhap News

With NVIDIA reportedly developing a new low-cost artificial intelligence (AI) chip for the Chinese market, the memory chipmakers that supply high-bandwidth memory (HBM) face a complicated calculus. Until now, HBM suppliers have contracted supply volumes in advance to match the launch schedules of NVIDIA's Blackwell AI chips and the next-generation Rubin, and have planned their production capacity accordingly. But with NVIDIA expected to request older-generation HBM for the AI chips destined for China, the likelihood that suppliers will have to revise those production plans is growing.

On the 20th, Reuters reported that NVIDIA is developing the "B30A," a chip that would outperform the "H20" AI chip it currently supplies to the Chinese market. While the specifications have not yet been finalized, NVIDIA is reportedly pushing to deliver test samples to Chinese customers as early as next month. NVIDIA said, "We are evaluating a variety of products within the scope allowed by the government," adding, "All products we offer have been approved by the relevant authorities and are designed solely for beneficial commercial use."

The B30A that NVIDIA would supply to China is said to be based on the design of the recently launched Blackwell platform. The specifications of the HBM to be integrated into the B30A have not been disclosed, but the industry considers an 8-layer HBM3E product most likely. At a White House briefing on the 12th (local time), President Trump mentioned the possibility of a deal for a somewhat lower-performing Blackwell processor, saying exports to China could be permitted if performance were reduced by 30-50% from existing Blackwell products. Since NVIDIA's Blackwell platform uses 12-layer HBM3E, the expectation is that an HBM3E product will be used in the B30A as well.

Producing the older HBM that NVIDIA would request poses no technical problem for memory chipmakers. The issue is that NVIDIA's HBM suppliers have already committed their production capacity to advanced next-generation HBM, so such a request could disrupt those plans. Unlike general-purpose DRAM and NAND flash, HBM is not a product whose output memory chipmakers can adjust on their own. Typically, AI chip designers such as NVIDIA, AMD, and Broadcom finalize supply volumes about a year in advance, in line with their product schedules; once contracts are set, memory chipmakers expand capacity and invest in mass-production equipment and personnel.

SK hynix and Micron are currently focusing their capacity on mass-producing 12-layer HBM3E (fifth-generation HBM) products, and from the second half of the year they plan to invest aggressively in facilities for mass-producing HBM4 (sixth-generation HBM). Samsung Electronics, which has been struggling after failing to enter NVIDIA's HBM3E supply chain, has also begun investing in HBM4 mass-production facilities.

A semiconductor industry official said, "Existing NVIDIA suppliers such as SK hynix and Micron have planned to maximize revenue by concentrating on mass production of the latest HBM products, 12-layer HBM3E and HBM4, so they may struggle to respond to demand for older HBM for low-cost AI chips," adding, "For Samsung Electronics, which has surplus production capacity, however, this could be an opportunity. Since the Chinese market accounts for more than 10% of NVIDIA's revenue, the supply volumes involved are expected to be significant."

※ This article has been translated by AI.