Intel's stock surged nearly 30% and the semiconductor industry is buzzing after Nvidia announced it would invest $5 billion (about 6.93 trillion won) in Intel and pursue synergy between Intel central processing units (CPUs) and Nvidia graphics processing units (GPUs).
The key point of the announcement is Nvidia Chief Executive Officer Jensen Huang's emphasis on "the historic union of the Intel x86 architecture ecosystem and the Nvidia artificial intelligence (AI) ecosystem," a signal that two giants at opposite poles of chip design are moving to pursue synergy.
The backdrop to Huang stepping up with a massive investment, and even technical support for Intel, appears to be a gesture to bolster the Trump administration's key agenda of reviving the U.S. semiconductor industry. Nvidia has been cooperating with President Trump as much as possible in order to keep selling its GPUs in China.
The collaboration between Nvidia and Intel is expected to be a strong boost for Intel, which had long failed to adapt to the AI era and was in decline. Still, it remains to be seen whether the partnership will translate into technical gains and tangible business results. However forward-leaning Nvidia's support may be, whether a chip the two companies co-develop is actually chosen in the enterprise and consumer markets is a separate matter.
◇ Will Intel x86 and Nvidia CUDA create synergy?
Intel, once the dominant force in the data center CPU market, has been on a downward trajectory since the AI investment boom shifted the paradigm toward Nvidia GPUs, a shift it failed to keep pace with. There are various reasons, but fundamentally, the high-performance x86 CPUs that are virtually Intel's identity failed to deliver meaningful efficiency for AI computation.
By contrast, Nvidia has seized the data center market with GPUs built on CUDA (Compute Unified Device Architecture), its proprietary parallel computing platform specialized for AI computation. This has not only eroded Intel's share of the data center CPU market, which accounted for most of its operating profit, but also dragged down utilization at Intel's semiconductor production lines, while its heavily invested foundry (contract chip manufacturing) business has continued to rack up large losses without notable results.
Why did Intel's x86-based CPUs lose out to Nvidia in the AI data center market? Structurally, CPUs have a small number of powerful cores and are specialized for logic operations, sequential processing, and control and management of the overall system. They focus on the efficiency of single tasks and minimizing latency rather than raw data throughput.
GPUs, by contrast, have thousands of small cores and excel at processing large volumes of simple, repetitive operations simultaneously. Developed for 3D graphics rendering, they are optimized to process massive amounts of data in parallel. As a result, Intel CPUs increasingly ended up confined to a mere "manager" role in AI data centers, while Nvidia GPUs became entrenched as the main compute workhorses.
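To make the structural difference concrete, below is a minimal CUDA sketch (names and sizes are illustrative, not drawn from either company's products) contrasting a CPU loop, which walks an array one element at a time on a single core, with a GPU kernel that assigns one element to each of roughly a million lightweight threads.

    #include <cstdio>
    #include <cuda_runtime.h>

    // GPU kernel: thousands of lightweight threads each handle
    // exactly one element, all at the same time.
    __global__ void vector_add_gpu(const float* a, const float* b,
                                   float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    // CPU version: one powerful core processes the array sequentially.
    void vector_add_cpu(const float* a, const float* b, float* c, int n) {
        for (int i = 0; i < n; ++i) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;            // about one million elements
        size_t bytes = n * sizeof(float);
        float *a, *b, *c;
        cudaMallocManaged(&a, bytes);     // unified memory, for brevity
        cudaMallocManaged(&b, bytes);
        cudaMallocManaged(&c, bytes);
        for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

        int threads = 256;                          // threads per block
        int blocks = (n + threads - 1) / threads;   // cover all n elements
        vector_add_gpu<<<blocks, threads>>>(a, b, c, n);
        cudaDeviceSynchronize();                    // wait for the GPU

        printf("c[0] = %.1f\n", c[0]);              // expect 3.0
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }

For a simple, repetitive operation like this, the GPU's army of small cores finishes in a handful of parallel waves, while the CPU's few large cores must iterate a million times; AI workloads are dominated by exactly this kind of arithmetic.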
On top of this, rather than building x86-based CPUs, Nvidia has focused on developing CPUs optimized to pair with its GPUs so the GPUs can fully shine in the data center market. Its long partnership with Arm is the prime example: Nvidia has been encroaching on Intel's turf with Arm-based CPUs such as Grace, its successor Vera, and the "N1" system-on-chip (SoC) for Windows PCs.
◇ Intel gains an ally, but whether it will translate into market results is uncertain
The partnership is encouraging news for Intel, but there are many hurdles to clear technically. The top priority is overcoming data bottlenecks stemming from the structural issue that x86 CPUs and Nvidia GPUs use separate, independent memory. GPUs use high-speed VRAM (video RAM), while CPUs use system RAM. To share data, copies are required, which increases latency and power consumption.
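A rough CUDA sketch of that copy step, assuming the conventional discrete-GPU setup with explicit cudaMemcpy calls between host and device buffers (variable names are illustrative), shows where the extra latency and power go:

    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    int main() {
        const int n = 1 << 24;               // ~16M floats, about 64 MB
        size_t bytes = n * sizeof(float);

        // CPU side: buffer in system RAM.
        float* host_buf = (float*)malloc(bytes);
        for (int i = 0; i < n; ++i) host_buf[i] = 1.0f;

        // GPU side: separate buffer in VRAM; the data is not there yet.
        float* dev_buf;
        cudaMalloc(&dev_buf, bytes);

        // Explicit copy across the PCIe bus: this is the bottleneck the
        // article describes, adding latency and power cost before any
        // AI computation has even started.
        cudaMemcpy(dev_buf, host_buf, bytes, cudaMemcpyHostToDevice);

        // ... GPU kernels would operate on dev_buf here ...

        // Results must be copied back before the CPU can use them.
        cudaMemcpy(host_buf, dev_buf, bytes, cudaMemcpyDeviceToHost);

        printf("round trip of %zu bytes complete\n", bytes);
        cudaFree(dev_buf);
        free(host_buf);
        return 0;
    }

Every such round trip repeats for each batch of data, which is why an interconnect that lets the CPU and GPU share data at high speed matters so much for a combined chip.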
Huang also said in the announcement that the companies will apply NVLink, Nvidia's high-speed interconnect that tightly couples Intel's custom CPUs with Nvidia's GPUs to boost data transfer speeds, signaling the intent to unify the two companies' heterogeneous architectures. By opening to Intel a technology that had been available only within Nvidia's own server platforms, both companies can expect new revenue streams in the AI server market.
However, questions remain over whether the fruits of the collaboration will have the technical and price competitiveness to win over IT companies. In AI data centers, soaring power costs are the biggest problem, and Intel's high-performance CPUs, geared toward general-purpose computation, deliver less performance per watt than GPUs. GPUs themselves already consume a lot of power, so putting the two chips together sharply raises per-rack power density and thermal management costs, increasing the cooling and power infrastructure burden at the facility level.
A key question for Intel is whether Intel Foundry, its biggest headache, can rebound through the cooperation. Intel is struggling to secure external customers for its 1.8-nanometer-class Intel 18A and 1.4-nanometer-class Intel 14A processes, and Huang sidestepped the question of whether the partnership would help grow the foundry business. In other words, there is no guarantee Nvidia will use Intel Foundry.
Some foreign media have expressed skepticism. The Wall Street Journal (WSJ) noted, "With this investment, Intel has secured much-needed cash and, through joint chip development, has moved closer to the heart of the AI boom that Nvidia has largely monopolized," but added, "This is merely a tactical victory; what Intel needs more is structural change." It also argued that Intel should split off its foundry business to improve its chances of winning orders from external customers.