SK Telecom announced on the 4th that it signed an MOU with Panmnesia at MWC26 in Barcelona, Spain, to jointly develop a CXL-based next-generation AI data center architecture. From left, Jeong Seok-geun, head of SKT AI CIC, and Jeong Myeong-su, CEO of Panmnesia, pose for a photo in the SKT meeting room at MWC26. /Courtesy of SK Telecom

SK Telecom said on the 4th that at MWC26, the world's largest mobile communications exhibition held in Barcelona, Spain, it signed a memorandum of understanding (MOU) with Panmnesia, a company specializing in computing-resource interconnect technology, to jointly develop a next-generation artificial intelligence (AI) data center (DC) architecture based on CXL (Compute Express Link), an open interconnect standard. The aim is to innovate AI DC architecture: as advancing AI models drive a surge in memory demand, the two companies plan to improve performance and cost efficiency at the same time by changing the way computing resources are connected, rather than simply adding GPUs.

CXL is an interconnect standard that organically links CPUs, GPUs, and memory to enable ultra-high-speed, low-latency data processing, allowing computing resources that were previously confined to individual servers to be flexibly expanded and shared. The core of this collaboration is to use CXL-based technology to raise AI processing efficiency without unnecessary equipment expansion, improving the economics of AI DCs.

Panmnesia is a Korean startup with global-caliber CXL capabilities. It supplies the various link semiconductors (communication chips that streamline data movement) needed to build efficient AI DCs, including a fabric switch (a device that sits between multiple devices and manages data flow) and a link controller (a device that aids efficient data transfer between devices).

In existing AI DCs, CPUs, GPUs, and memory are bound together at the server level in a fixed architecture. As a result, even when resources sat idle on one server, they could not be used by another. In particular, whenever memory ran short, operators had to add GPUs they did not actually need, a recurring inefficiency that lowers GPU utilization and raises AI DC construction and operating costs.

To solve these problems, the two companies will apply CXL-based technology to shift from a fixed, server-unit architecture to one in which CPUs, GPUs, and memory can be flexibly connected and combined. They will expand the scope of resource connectivity, previously limited to within a single server, to the rack level spanning multiple servers, enabling selective use of exactly the resources needed.
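The difference between the two placement models described above can be sketched in a few lines. The toy model below is an illustration of the concept only; the `Server` class, function names, and all numbers are assumptions for this sketch, not SKT or Panmnesia code. In the fixed architecture a job must fit entirely on one server, while rack-level pooling lets spare memory and GPUs anywhere in the rack serve the same job.

```python
# Toy model contrasting server-bound and rack-pooled resource placement.
# All names and numbers are illustrative assumptions, not vendor code.
from dataclasses import dataclass

@dataclass
class Server:
    free_gpus: int
    free_mem_gb: int

def fits_server_bound(rack: list[Server], gpus: int, mem_gb: int) -> bool:
    """Fixed architecture: the job must fit entirely on a single server."""
    return any(s.free_gpus >= gpus and s.free_mem_gb >= mem_gb for s in rack)

def fits_rack_pooled(rack: list[Server], gpus: int, mem_gb: int) -> bool:
    """CXL-style pooling: GPUs and memory are drawn from a rack-wide pool."""
    return (sum(s.free_gpus for s in rack) >= gpus
            and sum(s.free_mem_gb for s in rack) >= mem_gb)

# A job needing 4 GPUs and 512 GB of memory: no single server can host
# it, but the rack as a whole can.
rack = [Server(free_gpus=8, free_mem_gb=256),
        Server(free_gpus=0, free_mem_gb=512)]
print(fits_server_bound(rack, 4, 512))  # False
print(fits_rack_pooled(rack, 4, 512))   # True
```

In the server-bound case the operator would have to buy another GPU server just for its memory, which is exactly the inefficiency the article describes.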

Along with this, the two companies will also change how resources are connected. Collaborative computing, in which multiple GPUs share and merge their computation results, is essential for large-scale AI training and inference. Until now, GPUs in AI DCs have exchanged this data over general-purpose networks such as Ethernet, a process involving data copying and software intervention that causes delays.

Instead of these general-purpose networks, the two companies will apply a "scale-up link" to connect resources more directly. The scale-up link joins resources at high speed without passing through a network stack, simplifying data transmission and improving computational efficiency.
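The latency argument can be illustrated with back-of-the-envelope arithmetic. The model below is a hedged sketch: the bandwidth and overhead figures are made-up assumptions, not measurements of any real system. The point it shows is structural: the network path pays fixed per-transfer costs for data copies and software intervention that a direct link avoids, and those fixed costs dominate the small, frequent exchanges typical of GPU collective operations.

```python
# Back-of-the-envelope latency model; all figures are illustrative
# assumptions, not measurements of any real network or link.

def network_transfer_us(size_mb: float, link_gbps: float = 100,
                        copy_us: float = 50, software_us: float = 30) -> float:
    """General-purpose network path: wire time plus fixed costs for
    data copying and software (protocol-stack) intervention."""
    wire_us = size_mb * 8 / link_gbps * 1000  # MB -> Mb; Mb/Gbps -> microseconds
    return wire_us + copy_us + software_us

def direct_link_transfer_us(size_mb: float, link_gbps: float = 100) -> float:
    """Direct scale-up link: same wire speed, no copy or stack overhead."""
    return size_mb * 8 / link_gbps * 1000

# For a 1 MB exchange, the fixed overheads double the transfer time
# in this toy model.
print(network_transfer_us(1))      # 160.0
print(direct_link_transfer_us(1))  # 80.0
```

With these assumed numbers the overhead matters less as transfers grow, which is why the benefit is framed around collective computing, where many small result exchanges happen constantly.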

In this collaboration, SK Telecom will lead the design of an architecture optimized for real commercial environments, leveraging its capabilities in building and operating large-scale AI DCs and its experience in AI model development and commercialization. Panmnesia will use its various link semiconductor technologies to implement the "Pure Scaleup AI Rack," which extends the scale-up link architecture, previously confined to the inside of a server, to the rack level and beyond.

The two companies plan to run actual AI models and comprehensively validate GPU and memory utilization, latency, and throughput, then unveil the next-generation AI DC architecture by the end of this year. They will then push for commercialization and business development after demonstrations in a real large-scale AI DC environment.

Jeong Seok-geun, head of SKT AI CIC, said, "AI DC competitiveness depends on system optimization that goes beyond GPU performance to include memory and data flow," and added, "This collaboration will ease the structural bottleneck 'memory wall,' where data movement and supply fail to keep pace with higher compute performance, lifting both AI DC performance and economics."

Jeong Myeong-su, CEO of Panmnesia, said, "Next-generation AI infrastructure is determined not by the performance of individual equipment, but by the 'architecture' created by diverse link semiconductors," and added, "Together with SK Telecom, we will present a standard model for a high-efficiency AI DC that will draw global market attention."
