An SK hynix employee works on the production line./Courtesy of SK hynix

SK hynix has formalized development of a next-generation solid-state drive (SSD) with Nvidia. The company has already posted strong results by supplying high bandwidth memory (HBM) to Nvidia, and it is now extending its customer- and service-tailored product development into the NAND flash sector.

In the high bandwidth flash (HBF) field, which uses NAND flash, SK hynix is working with SanDisk to establish "standards," among other efforts. As artificial intelligence (AI) services shift from training to inference, HBM, which is volatile memory, is running into technical limits. The aim is to overcome this through innovation in NAND, which is nonvolatile, and to supply products that meet the demands of global big tech firms.

◇ After DRAM, NAND also speeds up "AI-tailored" development

According to the semiconductor industry on the 16th, Vice President Kim Cheon-seong of SK hynix recently said at the "2025 Artificial Intelligence Semiconductor Future Technology Conference" (AISFC) that the company is developing an SSD with performance 10 times higher than before with Nvidia. Under the name "Storage Next" for Nvidia and "AI-N P" (AI NAND performance) for SK hynix, a proof of concept (PoC) is underway, with the goal of releasing a prototype at the end of next year. The two companies projected that IOPS, which refers to the number of input/output operations per second, could reach 100 million in 2027.
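For scale, an IOPS figure maps to raw throughput only once a transfer size is assumed. The short sketch below uses a hypothetical 4 KiB I/O size, an assumption for illustration rather than a figure disclosed by either company, to show what 100 million IOPS would mean in bandwidth terms.

```python
# Rough conversion from an IOPS target to raw bandwidth.
# The 4 KiB transfer size is an illustrative assumption, not a figure
# disclosed by SK hynix or Nvidia.

iops = 100_000_000          # projected 2027 target: 100 million I/O operations per second
io_size_bytes = 4 * 1024    # assumed size of each random I/O (4 KiB)

bandwidth_bytes_per_s = iops * io_size_bytes
print(f"{bandwidth_bytes_per_s / 1e9:.0f} GB/s")  # ~410 GB/s at a 4 KiB I/O size
```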

Along with this, SK hynix has been working with SanDisk since August to establish an "HBF standard." HBF has a structure similar to HBM, which stacks multiple DRAM dies to widen the data pathway and increase bandwidth; the idea is to stack NAND dies in the same layered way so the flash is suited to AI services. SK hynix plans to release an alpha version of HBF around late January next year and has set a mid- to long-term plan to send prototypes to customers for performance evaluation in 2027. Shinyoung Securities predicted that in 2027, when HBF commercialization begins, the market will start at around $1 billion (about 1.4 trillion won) and could grow to $12 billion (about 17 trillion won) by 2030.

There are few precedents in SK hynix's earlier product development in which the company built cooperative relationships with multiple corporations ahead of commercialization. Memory companies like SK hynix have traditionally supplied general-purpose products, and the big tech firms using them have optimized the chips for their own purposes. In AI services, however, memory chips tailored to each company have emerged as a factor that determines performance, making custom development necessary.

At the SK AI Summit 2025 held at COEX in Gangnam-gu, Seoul in November, visitors look at SK hynix memory adopted in the NVIDIA AI accelerator GB300./Courtesy of News1

◇ Why "custom development" came first to DRAM

To understand why semiconductor corporations such as SK hynix, Nvidia, and SanDisk have expanded their customer- and service-tailored development strategies to NAND, we must first examine why HBM has risen in the market. This is because current NAND-based technological innovation is being viewed as an alternative to overcome the "limits of HBM."

Until the emergence of AI, the central processing unit (CPU), which processes instructions serially, handled the computations needed to run computer operating systems. In this structure, memory supplied the data the CPU requested and did not need large capacity. Processing AI workloads serially, however, took too long, because they run calculations over massive numbers of parameters. AI, which is built on large-scale matrix multiplication and vector operations, was better suited to the graphics processing unit (GPU), which specializes in parallel computation. In fact, research on TensorFlow, an open-source machine learning platform, found that running the same training model took 17 minutes 55 seconds on a CPU but only 5 minutes 43 seconds on a GPU.
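The gap comes from how matrix multiplication maps onto hardware. The sketch below is a simplified illustration rather than the benchmark cited above: it contrasts a purely serial triple loop with a vectorized, multithreaded library call of the kind that parallel hardware accelerates. The matrix size and any timings it prints are illustrative only.

```python
# Simplified contrast between serial and parallel execution of matrix multiplication.
# The naive triple loop performs one multiply-accumulate at a time, the way a purely
# serial core would; the @ operator dispatches to a vectorized, multithreaded BLAS
# kernel, the style of computation that GPUs push much further with thousands of cores.
import time
import numpy as np

n = 256                      # kept small so the pure-Python loop finishes in seconds
a = np.random.rand(n, n)
b = np.random.rand(n, n)

def matmul_serial(a, b):
    size = a.shape[0]
    out = np.zeros((size, size))
    for i in range(size):
        for j in range(size):
            acc = 0.0
            for k in range(size):
                acc += a[i, k] * b[k, j]
            out[i, j] = acc
    return out

t0 = time.perf_counter()
matmul_serial(a, b)
t1 = time.perf_counter()
a @ b                        # vectorized, parallel library call
t2 = time.perf_counter()
print(f"serial loop: {t1 - t0:.2f} s, parallel BLAS: {t2 - t1:.4f} s")
```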

But as GPUs became widely used for AI, a new problem emerged. To maximize GPU computation, a structure is needed that can deliver vast amounts of data continuously. However, traditional DRAM tuned to CPU performance has a sequential transfer structure and low bandwidth, creating "memory idle time" where the GPU waits. This led to a "memory bottleneck," reducing overall GPU computational speed. HBM, which stacks DRAM to increase bandwidth and send large amounts of data to the GPU at once, rose as the solution.
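One way to see the bottleneck is a rough roofline-style estimate: whether a workload is limited by compute or by memory traffic depends on how many arithmetic operations it performs per byte fetched from memory. The numbers below are illustrative assumptions for a generic accelerator, not specifications of any actual GPU or HBM generation.

```python
# Rough roofline-style estimate of when an accelerator stalls on memory rather than compute.
# Both hardware figures are illustrative assumptions for a generic chip, not real specs.

compute_peak_tflops = 100.0   # assumed peak arithmetic throughput, in TFLOP/s
mem_bandwidth_tb_s = 3.0      # assumed memory bandwidth, in TB/s

# Arithmetic intensity = floating-point operations performed per byte moved from memory.
# Big training-style matrix multiplies reuse each fetched value many times (high intensity);
# token-by-token inference streams huge weight matrices for little compute (low intensity).
workloads = {"training-style matmul": 300.0, "inference decode": 1.0}

for name, flops_per_byte in workloads.items():
    # Attainable throughput is capped by whichever roof is lower.
    attainable = min(compute_peak_tflops, mem_bandwidth_tb_s * flops_per_byte)
    bound = "memory-bound" if attainable < compute_peak_tflops else "compute-bound"
    print(f"{name}: ~{attainable:.0f} TFLOP/s ({bound})")
```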

◇ HBM hits limits in AI inference, attention shifts to NAND

GPUs equipped with HBM are widely viewed as having served the AI market well through this year, because major big tech companies were still in the research and development (training) phase of AI. But as AI services enter the commercial stage, "inference," which determines real-world service performance, is becoming more important.

For AI that answers user queries through inference, minimizing latency is a key task. According to the industry, the GPT-4 model used in ChatGPT requires 3.6 terabytes (TB) of memory for inference, while the capacity HBM3E (5th generation) currently provides per GPU is about 192 gigabytes (GB). Six to seven GPUs must be grouped to handle an inference request, which drives up the cost of providing the service.

Personalized AI services are also cited as a factor pushing HBM's capacity to its limits. For AI to remember a user's behavior and conversations and provide context-aware answers, it must store more data. HBM, which is volatile memory, is ill-suited to AI that has entered the personalization and inference stages.

HBF structure./Courtesy of Growth Research

◇ SK hynix advances "NAND sophistication" across three areas

NAND, a nonvolatile storage device, has emerged as an alternative technology to overcome these limits. It can retain user-specific data for long periods and is also suitable for remembering the "long sequences" required in inference.

SK hynix is responding to these market changes by dividing its NAND development for the AI era into three major categories: ▲ "AI-N P," which advances existing SSD performance for AI in partnership with Nvidia ▲ "AI-N B" (HBF), developed in cooperation with SanDisk ▲ "AI-N D," a middle storage tier that achieves ultra-high capacity (terabytes to petabytes) and combines SSD speed with HDD economics.

Through the "AI-N P" development project, SK hynix aims to secure core technologies that efficiently handle the massive data input and output that occur in large-scale AI inference environments. The company seeks to significantly increase processing speed and energy efficiency by minimizing bottlenecks between AI computation and storage. To this end, it is designing NAND and controllers with a new architecture.

SK hynix VFO technology briefing materials./Courtesy of Growth Research

In the HBF area, the company is pursuing a strategy to accelerate development by applying to NAND the packaging capabilities that made it No. 1 in the HBM market. Han Yong-hee, a researcher at Growth Research, said, "VFO, SK hynix's signature technology (a semiconductor packaging technology that changes the wires connecting the chip and circuits from curved to vertical), is a new packaging structure that connects chips vertically along their outer edge instead of penetrating them with conventional TSV (a technology that drills holes through chips to stack and connect multiple semiconductor dies vertically)," adding, "It can avoid the production yield loss that occurs when TSV is added on top of the already complex structure of 3D NAND."

In the HBF collaboration with SanDisk, SK hynix is responsible for these packaging capabilities, while SanDisk contributes flash design and large-capacity NAND technologies. Han said, "If major GPU companies such as Nvidia adopt the HBF standard, there is a high possibility that SK hynix will emerge as a central player across the twin memory axes of HBM and HBF."

A semiconductor industry official said, "As NAND, which had been sidelined in the AI market, has emerged as an essential element for implementing inference AI, full-scale custom development is underway," adding, "If technology as innovative as HBM emerges, we could see reductions in AI inference costs along with performance improvements."

※ This article has been translated by AI.