On the morning of the 11th, we visited the KT Cloud AI Innovation Center in Mok-dong, Yangcheon-gu, Seoul. Past the security gate, in a roughly 1,400-square-foot server room, heavy fan noise filled the air. In front of us, rows of black racks, the tall cabinets that house AI servers, stretched into the distance. In one corner, an AI server equipped with Nvidia's latest graphics processing unit (GPU), the "B200," was running training workloads. This was the heart of the AI Innovation Center, which KT Cloud built by combining AI data center infrastructure with a showroom.
The AI Innovation Center, which opened that day, reproduces the exact environment of a real AI data center, including AI servers, cooling facilities, networking, and power infrastructure. The company said it is not a simple equipment display but a testbed where corporations and institutions can directly observe and experiment with the design and operation methods of AI data centers they intend to build.
◇ D2C liquid cooling and immersion cooling demonstration stage
What catches the eye first is a large server rack densely packed with B200 GPUs. Looking down at the bottom of the rack, thick and thin coolant hoses ran tightly right beneath the chips. This was a direct-to-chip (D2C) liquid cooling system operating under real load, circulating coolant through cold plates attached directly to the chips instead of cooling the air around the servers. Assuming an ultra-high-heat server environment at the B200 and NVL72 (Nvidia's ultra-high-density AI rack system that aggregates 72 B200 GPUs in a single rack) level, KT Cloud has spent an extended period verifying coolant flow, pressure, and temperature conditions, and has already commercialized the technology for the first time in Korea at its Gasan AI Data Center.
Right next to it, a transparent tank-shaped device drew attention. It was a full-scale model of an "immersion cooling" system that submerges servers entirely in a special coolant to dissipate heat. The company said that in a proof of concept (PoC) conducted at the Yongsan data center in Seoul, immersion cooling delivered up to 60% power savings versus air cooling and achieved a power usage effectiveness (PUE) in the 1.08–1.33 range. Heo Yeong-man, head of the DC division at KT Cloud, said, "We plan to gradually expand the scope of immersion cooling not only to new data centers but also to existing centers."
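PUE, the metric cited above, is simply total facility power divided by the power consumed by the IT equipment itself; 1.0 would mean every watt reaches the servers. A minimal sketch of the arithmetic (the load figures below are invented for illustration, not KT Cloud's measurements):

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_load_kw

# Illustrative figures only: a 1,000 kW IT load whose cooling and other
# overhead drops from 500 kW (air cooling) to 80 kW (immersion cooling).
air_cooled = pue(1000 + 500, 1000)
immersion = pue(1000 + 80, 1000)
print(f"air-cooled PUE: {air_cooled:.2f}")   # 1.50
print(f"immersion PUE:  {immersion:.2f}")    # 1.08
```

With these assumed loads, immersion cooling lands at the bottom of the 1.08–1.33 range the company reported.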
◇ Implementing a 'full-stack' AI data center from RoCEv2 networking to digital twin operations
The network and power infrastructure also position the facility as a "next-generation AI data center." KT Cloud teamed up with global networking company Arista to build an AI-dedicated network based on RoCEv2 (RDMA over Converged Ethernet v2, a technology that lets GPUs and servers exchange data directly over standard IP networks at high speed and low latency). Designed on the premise of large-scale communication between GPU servers, the network improves cost efficiency, scalability, and operational convenience compared with existing Nvidia InfiniBand-based configurations.
For power infrastructure, KT Cloud applied a standard AI server rack of its own design. Based on specifications from the Open Compute Project (OCP), a global open-source hardware community, the rack adopts a high-density power design that handles more than 20 kW per rack and reduces energy loss with a 48 V direct-current (DC) power structure. Another differentiator is the modular configuration of power modules, distribution units, and monitoring devices, allowing customers to easily replace and expand components to meet desired specifications.
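The energy-loss argument for a 48 V DC bus is that it removes conversion stages: each AC-DC or DC-DC hop in a conventional chain wastes a few percent, and the losses multiply. A rough sketch with assumed stage efficiencies (the percentages are illustrative, not KT Cloud's measured values):

```python
from functools import reduce

def end_to_end_efficiency(stage_efficiencies):
    """Chained power-conversion stages multiply their individual efficiencies."""
    return reduce(lambda a, b: a * b, stage_efficiencies)

# Assumed, illustrative stage efficiencies:
ac_path = end_to_end_efficiency([0.96, 0.94, 0.95])  # UPS, AC-DC PSU, DC-DC
dc48_path = end_to_end_efficiency([0.97, 0.96])      # rectifier, DC-DC at the rack

print(f"conventional AC path: {ac_path:.1%} end-to-end")
print(f"DC 48 V bus path:     {dc48_path:.1%} end-to-end")
```

Under these assumptions, the shorter DC chain delivers roughly 93% of input power to the servers versus roughly 86% for the longer AC chain; the actual figures depend on the specific equipment.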
Operational automation technology is also a core asset of the center. "Path Finder" builds a digital twin of the entire data center power grid to simulate load and stability, and automatically finds the safest power route during failures or load fluctuations. "DIMS Insight" uses AI to analyze the torrent of data from facility management systems (FMS) covering power, cooling, and security, detecting early signs of failure and supporting predictive maintenance. Both are designed to head off downtime (the period when systems, servers, or networks are offline or unavailable) before it occurs, which can be critical in high-density AI data centers.
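The "safest power route" search that Path Finder performs can be framed as a weighted shortest-path problem over the grid's digital twin, with edge weights reflecting load or risk. A minimal, hypothetical sketch (the topology, node names, and risk scores are invented for illustration and are not KT Cloud's model):

```python
import heapq

def safest_route(grid, start, target):
    """Dijkstra's algorithm over a power-grid graph; edge weights model risk."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == target:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, risk in grid.get(node, []):
            nd = d + risk
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    # Reconstruct the lowest-risk path by walking predecessors backwards.
    path, node = [], target
    while node != start:
        path.append(node)
        node = prev[node]
    return [start] + path[::-1], dist[target]

# Hypothetical grid: substation -> distribution panels -> rack, risk-weighted.
grid = {
    "substation": [("panel_a", 0.2), ("panel_b", 0.5)],
    "panel_a": [("rack_12", 0.9)],  # panel_a is heavily loaded right now
    "panel_b": [("rack_12", 0.1)],
    "rack_12": [],
}
route, risk = safest_route(grid, "substation", "rack_12")
print(route, round(risk, 2))  # ['substation', 'panel_b', 'rack_12'] 0.6
```

Re-running the search as the risk weights change with load is, in spirit, what a simulation over a digital twin enables.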
◇ 24-hour safety checks with autonomous driving robots
Inside the center, a demonstration showed an autonomous inspection robot moving between server rooms, filming rack fronts and monitoring temperature, humidity, and smoke in real time. When it detects signs of overheating via its thermal imaging camera, an alert immediately appears on the control screen, and, if needed, the affected area can be checked remotely. In the long term, KT Cloud plans to leverage such automation technologies to cut the current data center operations workforce of 60–70 people to about one-third and to further advance a 24-hour non-stop operations framework.
On one side of the center, a demo zone was set up to experience B200-based AI training and an MLOps environment. Visitors could load prepared datasets to run training jobs and watch on screen as the trained model was deployed to a service environment.
Choi Ji-woong, CEO of KT Cloud, said, "The AI Innovation Center is not just a showroom but a core platform for validating future-oriented AI data center technologies," adding, "While serving as a reference data center that domestic corporations can consult when they embark on building AI infrastructure, we will work together to establish a Korea-style AI data center standard."