Groq, an artificial intelligence (AI) chip startup that Nvidia "acquired by proxy" for about 29 trillion won, has asked Samsung Electronics' foundry (contract chip manufacturing) division to ramp up production, according to industry sources. As demand grows for inference AI chips that maximize performance per watt, the Samsung Electronics foundry division is expected to accelerate its profitability improvement by deepening its collaboration with Groq.
According to industry sources on the 9th, Groq recently decided to raise the volume of AI chips it outsourced to the Samsung Electronics foundry division last year from about 9,000 wafers to about 15,000 wafers. Last year's output amounted to sample chips for verifying that they could properly handle AI inference; analysts say that starting this year the company has entered the initial stage of mass production for commercialization.
Groq is an AI chip startup that drew attention in December last year when Nvidia spent about $20 billion (about 29 trillion won) on an "acquisition by proxy." Rather than taking control of the company, Nvidia said it would cooperate with Groq under a "nonexclusive technology license agreement." After the license was signed, Chief Executive Officer Jonathan Ross and other executives joined Nvidia to work on integrating Groq's chip designs into Nvidia products. Nvidia is understood to have chosen this structure to absorb key talent while avoiding antitrust scrutiny, achieving an effect virtually equivalent to an acquisition.
The process of advancing an AI model is typically divided into "training" and "inference." Training is the stage of learning patterns from large volumes of data, while inference is the process of deriving predictions or conclusions about new data using the trained model. Nvidia, AMD, and the other companies that currently dominate the AI chip market mass-produce chips specialized for training, but those chips' heavy power consumption and high purchase prices are driving demand toward inference AI chips that can run AI models more efficiently. The prevailing view is that Nvidia, which has dominated the market for training chips, acquired Groq by proxy to extend its ecosystem into the inference market.
Although the volume Groq outsourced to Samsung Electronics is not large, the Samsung Electronics foundry division is seen as having aggressively pursued the order to lay the groundwork for winning inference AI chip contracts. In addition to Groq, the Samsung Electronics foundry division also produces the entire volume of processors from HyperAccel, a domestic inference AI chip startup. Samsung Electronics is mass-producing both Groq's and HyperAccel's AI chips using the 4-nanometer (nm, one-billionth of a meter) process.
A semiconductor industry official said, "The 4 nm process the Samsung Electronics foundry division uses to mass-produce Groq's AI chips incorporates a host of improved steps to enhance chip performance," adding, "Given that unit costs for the process are high and industry demand is greatest for 4–5 nm processes, the order is meaningful even as a reference for keeping pace with TSMC. With Nvidia also entering the inference AI chip market and Groq increasing output, some project that the inference AI chip market will open in earnest."
Meanwhile, with reports that Nvidia will unveil an inference-specialized chip based on Groq's design at GTC 2026, market interest in inference AI chips is growing. The industry expects Nvidia to commercialize Groq's inference chip design, which mounts static RAM (SRAM) in place of the high-bandwidth memory (HBM) used in existing AI chips. Using SRAM instead of HBM is said to increase data transfer speed and power efficiency while also lowering chip prices.