Samsung Electronics is said to have developed a server memory module called "SOCAMM2," based on low-power LPDDR DRAM, and supplied prototypes to Nvidia.
On the 18th, Samsung Electronics introduced SOCAMM2 on the tech blog of its official website, saying the product "combines the low-power characteristics of the latest LPDDR5X with the scalability of a modular structure to offer possibilities that differentiate it from existing server memory." As generative artificial intelligence (AI) spreads, the compute workloads behind it are growing, driving up data center power consumption and, with it, market demand for low-power memory solutions. Samsung Electronics developed SOCAMM2 in response to this demand.
SOCAMM2 is a next-generation module specification in the final stages of standardization at the JEDEC Solid State Technology Association. It targets the high-density configurations required in data centers and AI servers: the module is 57% smaller than a conventional DIMM, improving space efficiency, and is said to be more than 20% faster than its predecessor, SOCAMM1. Developed for high-performance AI servers, the module is expected to offer 192GB of capacity at per-pin speeds of 8.5–9.6Gbps.
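To put the headline speed in context, a rough peak-bandwidth estimate is possible, assuming (this is not stated in the article) that each SOCAMM2 module exposes a 128-bit data bus, as has been reported for the first-generation SOCAMM form factor:

\[
9.6\ \text{Gb/s per pin} \times 128\ \text{pins} \div 8\ \text{bits/byte} \approx 153.6\ \text{GB/s per module}
\]

Under the same bus-width assumption, the lower 8.5Gbps rate works out to roughly 136GB/s per module.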
While fully leveraging the low-power, high-bandwidth characteristics of LPDDR5X, SOCAMM2 can significantly reduce server board space, an advantage in next-generation AI server environments where high-performance chips are densely packed. Unlike onboard LPDDR, it adopts a removable modular structure, making it easier to replace a failed module or upgrade capacity.
From the early stages of SOCAMM2 development, Samsung Electronics reportedly worked closely with Nvidia and reached the customer sample (CS) stage earlier than its competitors. The CS stage is a key gateway in which stability and compatibility are verified in real system environments, and reaching it is read as a sign that the module met Nvidia's requirements for power, bandwidth, and thermal management.
The market expects SOCAMM2 to be installed in Nvidia's next-generation AI chip "Vera Rubin." Given Nvidia's influence in the AI accelerator market, securing priority supply rights for Vera Rubin would likely lead to supply for subsequent platforms as well.
The SOCAMM2 market is widely expected to expand rapidly starting in the second quarter of next year, when shipments of Nvidia's Rubin begin in earnest. The industry also suggests that SOCAMM2 could establish itself, alongside High Bandwidth Memory (HBM), as one of the two pillars of AI memory. A Samsung Electronics representative said, "We plan to further strengthen our server memory lineup and continue to introduce solutions that provide a balanced combination of the performance, power, and scalability required by next-generation AI data centers."