SK hynix has moved to strengthen its capabilities in developing customer-tailored memory. As artificial intelligence (AI) services shift their focus from training to inference, the configuration of the semiconductor chips that power them is changing. Global Big Tech firms, such as Google with its tensor processing unit (TPU), are designing in-house chips optimized for their own AI services. As a result, demand is growing for memory specifications optimized for each component handling a given stage of an AI workload.
SK hynix recently unveiled a vision called "full-stack AI memory creator" in response to this market shift. The intent is to go beyond simply supplying standard memory and work with customers from the AI semiconductor design stage to create "customized products" and improve outcomes.
On the 5th, SK hynix opened a new experienced-hire posting to recruit specialists in "custom memory design." Applications will be accepted through the 15th for roles in ▲ high bandwidth memory (HBM) circuit design ▲ physical design ▲ HBM digital design.
In particular, SK hynix is hiring experienced professionals for the digital design role tied to "customer-tailored products," spanning the RTL design, front-end, and back-end fields. The aim is to strengthen "memory creator" capabilities across the full process in the HBM domain, where it holds a technological edge. HBM is a high-performance DRAM used in Nvidia graphics processing units (GPUs) and is essential for AI computation and inference. The digital design role optimizes the "base die," the brain that controls the memory, to meet customer requirements.
In the experienced-hire posting, SK hynix said of the "HBM digital design" role that "the role is to communicate with customers to specify requirements and draft detailed specs," adding that "based on customer requirements, the team conducts RTL (a modeling method focused on data flow and logic operations) design and defines intellectual property (IP) behavior with related departments."
Beyond the digital design role, SK hynix also emphasized "customer-tailored capabilities" in its other experienced-hire postings. Regarding the HBM circuit design role, it said it is "an organization that leads HBM technology in collaboration with the world's top AI customers." On the physical design role, it said, "Based on foundry process design kits (PDK) and electronic design automation (EDA) tools, the team performs work to realize full custom design."
◇ Increase customer touchpoints and strengthen the "custom design" organization
In the "2026 organizational restructuring and executive appointments" carried out on the 4th, SK hynix also focused on expanding its customized memory business. The company set up a dedicated organization for yield and quality in custom HBM packaging, calling it "a change to respond in a timely manner to the expansion of the customized memory market."
It will also increase contact with global Big Tech companies that are designing their own AI chips. The company plans to establish a dedicated HBM technical organization in the Americas to provide prompt technical support to customers. It will also launch a "global infrastructure" organization dedicated to strengthening global manufacturing competitiveness, including building an advanced packaging fab in Indiana, United States.
In major hubs such as the United States, China, and Japan, the company will set up "global AI research centers" to recruit talent and bolster system research capabilities. It will also create a new customer-centric matrix organization called "Intelligence Hub." This organization aims to provide "value beyond expectations" to customers by integrating customer, technology, and market information into an AI-based system.
The "customized memory" that SK hynix is targeting is already being reflected in the industry. A prime example is Rubin, the next-generation AI semiconductor platform that Nvidia recently unveiled with a goal of mass production in 2026. The platform pairs the GPU with HBM4 (6th-generation HBM), the central processing unit (CPU) with LPDDR5X (low-power DRAM), and the CPX (an accelerator optimized for AI inference) with GDDR7 (graphics DRAM).
A semiconductor industry official said, "The uniform memory architecture of simply attaching DRAM and NAND to the CPU is breaking down and shifting toward 'function-specific memory optimization,' and securing the corresponding design and manufacturing capabilities is becoming the factor that will determine future competitiveness."