The surge in memory semiconductor demand driven by the spread of artificial intelligence (AI) services is extending beyond high-bandwidth memory (HBM) to low-power DRAM (LPDDR). LPDDR, mainly used in smartphones, tablets, and laptops, is also called "mobile DRAM." Recently, as Nvidia, Qualcomm, and Tesla have actively adopted it in their AI chip designs, a shortage has emerged.
According to the industry on the 28th, each AI server rack that major semiconductor design corporations release this year is expected to carry an amount of LPDDR equivalent to that used in thousands of smartphones. As a result, some analysts say a "panic buying" phenomenon is appearing among smartphone makers trying to secure LPDDR. Observers note that the stronger pricing power of major memory companies such as Samsung Electronics, SK hynix, and Micron, already evident in HBM, is now appearing in LPDDR as well.
◇ Thousands of smartphones' worth of LPDDR used in a single AI rack
The three memory companies (Samsung Electronics, SK hynix, and Micron) have recently begun full-scale mass shipments of SOCAMM2 products mounted on AI chips. SOCAMM2 is a memory module that customizes LPDDR5X for AI chips. LPDDR standards have advanced in the order 1, 2, 3, 4, 4X, 5, and 5X; SOCAMM2 adapts the latest standard, LPDDR5X, to server environments for mounting in next-generation AI chips.
SK hynix on the 20th formalized the start of mass shipments of a 192-gigabyte (GB) SOCAMM2 based on a 10-nanometer (nm, one-billionth of a meter) class sixth-generation (1c) process. Micron also said in March that it was "mass producing SOCAMM2." Samsung Electronics has hinted at supplying to customers by exhibiting actual SOCAMM2 units at several recent trade shows.
Nvidia is cited as the largest destination for SOCAMM2. Nvidia plans to launch its next-generation AI chip Vera Rubin in the second half of this year. A key feature is bundling 72 Rubin graphics processing units (GPUs) and 36 Vera central processing units (CPUs) into a single rack to boost performance.
SOCAMM2 is mounted on the Vera CPU. Nvidia says each Vera CPU supports up to 1.5 terabytes (TB) of LPDDR5X, achieved by attaching eight 192GB SOCAMM2 modules to a single CPU. Scaled to a full rack of 36 Vera CPUs, that means more than 50TB of LPDDR5X per rack, about 3.2 times the previous generation (Blackwell, 17TB of LPDDR5X).
Typically, a premium smartphone mounts 12GB of LPDDR5X. By simple arithmetic, one Vera Rubin rack alone would use an amount of LPDDR5X equivalent to about 4,500 smartphones. An industry official said, "AI chips are acting like a 'black hole' sucking in low-power DRAM," and added, "The LPDDR shortage is expected to become more pronounced from the second half of this year when Vera Rubin launches."
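The figures above chain together by simple multiplication. As a sanity check, a short Python sketch using the article's own numbers (decimal units, 1TB = 1,000GB; the exact totals come out slightly above the rounded figures quoted):

```python
# Back-of-the-envelope check of the rack-level LPDDR5X figures cited above.
# All constants are taken from the article; capacities are decimal (1 TB = 1,000 GB).

SOCAMM2_MODULE_GB = 192   # one SOCAMM2 module
MODULES_PER_CPU = 8       # eight modules per Vera CPU
CPUS_PER_RACK = 36        # Vera CPUs in one Vera Rubin rack
BLACKWELL_RACK_TB = 17    # previous-generation rack's LPDDR5X capacity
PHONE_GB = 12             # LPDDR5X in a typical premium smartphone

per_cpu_tb = SOCAMM2_MODULE_GB * MODULES_PER_CPU / 1000  # 1.536 TB, "up to 1.5 TB"
rack_tb = per_cpu_tb * CPUS_PER_RACK                     # "more than 50 TB"
vs_blackwell = rack_tb / BLACKWELL_RACK_TB               # roughly 3.2x
phone_equiv = rack_tb * 1000 / PHONE_GB                  # roughly 4,500 phones

print(f"Per Vera CPU: {per_cpu_tb:.3f} TB")
print(f"Per rack: {rack_tb:.1f} TB ({vs_blackwell:.1f}x Blackwell)")
print(f"Smartphone equivalent: {phone_equiv:,.0f} phones")
```

The article's "about 4,500 smartphones" follows from rounding the per-CPU capacity down to 1.5TB (54TB per rack, divided by 12GB per phone).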
LPDDR is also used in Tesla's and Qualcomm's next-generation AI chips, which is likely to deepen the shortage. Elon Musk, Tesla's chief executive officer (CEO), is known to have recently announced the tape-out (completion of the design and handoff to the fabrication process) of AI5, the company's next-generation autonomous driving AI chip, which mounts 192GB of LPDDR5X on a single chip.
Qualcomm, too, has seen LPDDR demand rise as it expands its business from traditional mobile chips to AI data center solutions. It has said each inference-focused AI200 card slated for release this year supports 768GB of LPDDR. Some interpret Qualcomm CEO Cristiano Amon's recent visit to Korea to meet with Samsung Electronics and SK hynix executives as being aimed at resolving intensifying instability in LPDDR supply and demand.
◇ Demand surges from smartphones to AI chips… contract prices soar
As the three memory companies prioritize LPDDR supply to high-margin AI chip makers, the smartphone industry has been put on alert. The recent negotiations between Samsung Electronics and Apple over pricing and initial supply of 12GB LPDDR5X are cited as a prime example of this trend.
Samsung Electronics is said to have recently discussed pricing and initial volumes for the LPDDR5X used in the iPhone 18 series, and Apple reportedly accepted Samsung Electronics' proposal even though it was roughly double previous prices: the per-unit price of 12GB LPDDR5X rose from the $30 range early last year to about $70 (about 100,000 won) early this year.
With AI chip demand added on top of existing smartphone demand, LPDDR prices are also rising quickly. Market research firm TrendForce estimates contract prices for LPDDR4X and LPDDR5X in the first quarter rose about 90% from the previous quarter and "appear poised to post the highest growth rate on record."
The rise in LPDDR prices is also a headache for fabless (semiconductor design) corporations. A senior official at a domestic fabless firm, who requested anonymity, said, "The price to buy one LPDDR5X now would have bought 16 last year," and added, "Even if we bear this price increase, there is little volume in the market, which is a major concern."
Soaring demand for SOCAMM2 in the AI market is attributed to ever-increasing performance requirements. With the conventional AI chip configuration of deploying HBM and double data rate (DDR)5 registered dual in-line memory modules (RDIMMs), it has become difficult to meet performance, power, and form factor needs all at once. Efforts are underway to address this with LPDDR. HBM attached to GPU packages provides ultra-high bandwidth but has limitations in capacity, cost, and heat. DDR5 RDIMM has large capacity but carries structural constraints with relatively lower bandwidth and power efficiency.
SOCAMM is a module structure that stacks multiple LPDDR dies and mounts them at high density near the processor package. Unlike conventional LPDDR, which was soldered directly (onboard) to the mainboard, it maintains high bandwidth and low power while allowing modules to be attached, removed, and replaced. It is closer to system memory than HBM, but in bandwidth and latency it can sit much closer to GPUs and CPUs than RDIMMs, serving as a "middle tier." As AI models grow and parameters surge, creating capacity and power constraints that HBM alone cannot handle, SOCAMM is emerging as a "future growth engine" by acting as a buffer.
A semiconductor industry official said, "As SOCAMM2 emerges as a new tier that bridges the gaps in performance, power, and form factor between HBM and system memory, adoption is increasing."