Illustration: ChatGPT

So-called "neocloud" operators, which buy Nvidia's high-performance artificial intelligence (AI) chips in bulk and rent them out to other corporations, are growing fast. As everyone from big tech corporations running cutting-edge AI models to new startups leans on neoclouds to handle the surging daily demand for AI compute, companies such as CoreWeave, Nebius, and Lambda Labs are expanding their footprint amid the AI investment boom, signing computing contracts worth billions.

Graphic: Son Min-gyun

◇ GPU rental costs one-third those of major cloud providers

As of the 25th, the neocloud market has emerged as a rising force in AI infrastructure, growing more than 200% over the past year, according to the tech industry. Market research firm Synergy Research said neocloud corporations' combined revenue topped $5 billion (about 7.2 trillion won) in the second quarter of last year, up 205% year over year. Full-year revenue last year is expected to exceed $23 billion (about 32 trillion won), and the market is projected to grow an average of 69% a year through 2030, approaching $180 billion (about 258 trillion won).

A neocloud is a next-generation cloud operator that rents out infrastructure specialized for high-performance graphics processing unit (GPU) computing to other corporations. Simply put, it is an "AI data center leasing business." While existing hyperscalers—big tech players such as Amazon Web Services (AWS), Microsoft (MS), and Google Cloud—have focused on providing general-purpose cloud computing services centered on central processing units (CPUs), neoclouds have cultivated a niche by concentrating on GPU-only cloud services optimized for AI training and inference. CoreWeave, Nebius, Lambda Labs, Iren, and Crusoe are representative neocloud corporations.

Graphic: Son Min-gyun

In particular, as AI compute volumes surged, a GPU shortage emerged and hyperscalers' general-purpose clouds could no longer meet demand on their own, propelling neoclouds as an alternative to fill the gap. For corporations that need AI data centers, using a neocloud cuts costs and allows flexible responses to unpredictable AI workloads. The biggest advantage is cost, at roughly one-third that of the established hyperscalers. According to market research firm Uptime Institute, renting Nvidia H100 GPUs from three neocloud firms (CoreWeave, Nebius, and Lambda Labs) averages $34 per hour, about two-thirds cheaper than the $98 per hour average at three major hyperscalers.

CoreWeave logo. /Courtesy of Yonhap News

◇ CoreWeave, Nebius, and Lambda sign large contracts with big tech

Keeping pace with this trend, major deals between big tech corporations and neoclouds have poured in.

CoreWeave, a leading neocloud, signed a $14.2 billion (about 20 trillion won) AI data center computing supply deal with Meta in September last year. In the same month, OpenAI, the developer of ChatGPT, added $6.5 billion in data center capacity purchases, bringing its cumulative purchases from CoreWeave to as much as $22.4 billion (about 32.37 trillion won). CoreWeave, which listed on Nasdaq in March last year, counts Nvidia, MS, OpenAI, Cohere, and IBM among its customers.

Nebius (formerly Yandex), headquartered in Amsterdam, the Netherlands, also signed an AI infrastructure deal worth $3 billion (about 4.4 trillion won) with Meta late last year, and Iren, founded in Australia, agreed late last year to provide MS over the next five years with a cloud service using Nvidia's GB300 architecture GPUs for $9.7 billion (about 14 trillion won). Crusoe is participating in Stargate, a large-scale AI infrastructure project led by OpenAI, Oracle, and SoftBank. Crusoe is handling construction of "Stargate 1," a 1.2-gigawatt (GW) data center dedicated to OpenAI being built in Texas.

Most neocloud corporations were founded in 2017–2018 to mine cryptocurrency and stockpiled GPUs for that purpose, but when ChatGPT appeared in November 2022 and blew the AI market open, they pivoted to GPU-based AI infrastructure.

Nvidia's full-throated support has also boosted neocloud operators. As big tech moved to develop its own AI chips to reduce reliance on Nvidia, Nvidia sought to check the hyperscalers by backing neoclouds. Through two rounds of investment in 2023 and 2024, Nvidia acquired a 6.5% equity stake in CoreWeave and a 0.5% stake in Nebius. It joined Lambda Labs' $480 million Series D in August last year, and in September last year it signed a $1.5 billion, four-year deal to lease back about 18,000 GPUs from Lambda Labs.

Nvidia CEO Jensen Huang said at the annual developer conference "GTC 2025" in March last year, "With the emergence of agentic AI and more, the compute required for AI has grown to 100 times last year's forecast."

◇ Behind the high growth lie risks of heavy debt and reliance on big tech

Neoclouds are also being used to blunt the ballooning funding risks created by big tech corporations' astronomical AI investments. By leasing AI data centers equipped with advanced GPUs from neoclouds, corporations can secure the infrastructure they need right away and book the spending as operating expenses rather than long-term capital investment.

While the market consensus is that neoclouds will continue to grow on the back of the AI investment boom, some warn that excessive debt and heavy reliance on big tech are risk factors. Neocloud operators must continually invest in bulk GPU purchases and AI data center construction, and heavy depreciation keeps them in the red. Global consulting firm McKinsey noted, "After factoring in labor, power, and depreciation, the gross margin of the GPU rental business is around 14%–16%, thinner than that of most retailers."

Another risk factor is customer concentration. For now, big tech is using neoclouds to meet AI demand, but if hyperscalers later build out their own dedicated GPU infrastructure, neocloud operators could lose the customers that account for a large share of their revenue.

Recently, concerns have emerged that neoclouds such as CoreWeave are entangled in "circular transactions" formed around Nvidia, OpenAI, and Oracle, fanning "AI bubble" arguments.

McKinsey said, "For neoclouds to avoid head-on competition with hyperscalers and survive over the long term, they must go beyond simple infrastructure rental and establish a firm position in niche markets," adding, "They need a strategy of actively joining the 'sovereign AI' projects of governments seeking AI sovereignty without relying on big tech, or of retaining AI startups, which urgently need to cut costs, as a core customer base."

※ This article has been translated by AI.