Nvidia said in a filing on the 17th (local time) that it had sold all of its remaining ARM equity, about 1.1 million shares worth roughly $140 million. The move effectively severs its capital ties with ARM, which it once pursued in a $40 billion acquisition billed as the "deal of the century."

Jensen Huang, NVIDIA CEO, gives a special address at NVIDIA CES 2026 Live in Las Vegas, USA, in January this year. /Courtesy of News1

Instead, Nvidia executed a strategic investment in rival Intel worth about $7.9 billion (about 11 trillion won, roughly 4% equity), putting its name on the shareholder roster. The market sees this as Nvidia shifting the axis of its strategy from the symbolic rationale of owning design IP to seizing the "real territory" of AI data centers.

Intel's x86 architecture has been relatively sidelined amid the artificial intelligence (AI) boom. While ARM, known for power efficiency, expanded its presence among big tech corporations, x86 drew criticism as "power-hungry and heavy for AI computation." Nvidia also designed its own central processing unit (CPU), Grace, on an ARM basis and has worked to build an independent platform to replace x86. Filling everything from the CPU to the graphics processing unit (GPU) with Nvidia technology to reduce dependence on Intel—so-called vertical integration—had been the core strategy.

But there are hard-to-ignore practical constraints behind Nvidia's recent decision to join hands again with Intel and x86. The "roots" of data centers worldwide are still x86-based: hyperscalers have spent decades building server infrastructure and software ecosystems around the architecture. Switching that massive installed base wholesale to ARM for a single generation of AI servers would entail astronomical expense and operational risk. Rather than demanding "replace all your existing servers," Nvidia chose a practical compromise: "We will make our GPUs fit perfectly into the Intel servers you already use."

On top of that, the recent rise of "agentic AI" adds technical justification to this odd cohabitation. Agentic AI, which makes its own judgments and calls multiple tools, demands low CPU latency and strong instantaneous burst performance. Whereas ARM excels at running many cores efficiently, x86, with its strength in high-clock processing, is seen as relatively advantageous for handling complex logical branches. If the CPU is slow to issue commands, an expensive GPU sits idle waiting for compute instructions, creating a bottleneck — and this is where Intel's x86 steps in as a relief pitcher.

In fact, Nvidia recently agreed with Intel to co-develop the direct integration of Nvidia's high-speed interconnect, NVLink, into Intel CPUs. This speaks directly to the desire of giant customers like Meta to bring in millions of GPUs while keeping their existing x86-based server environments. Meta's recent large-scale deal with Nvidia, struck while it held to an Intel-based server architecture, shows clearly why Nvidia needed to bring Intel into its camp.

In the end, Nvidia's strategy is to maintain design efficiency through ARM licenses while also absorbing the x86 server ecosystem where real business happens, "making Nvidia the standard wherever it is, in either camp." A semiconductor industry source said, "As Nvidia pulls Intel's manufacturing capabilities and x86's installed base into its own ecosystem, it is setting the stage for Nvidia GPUs to become the de facto standard choice in any server environment."
