Elon Musk, the Tesla CEO who is also pursuing artificial intelligence (AI) development, expressed his ambition to rapidly scale up the computing power behind 'Grok,' the AI chatbot from xAI, the company he founded.
On the 22nd (local time), Musk wrote on X (formerly Twitter), "xAI's goal is to have 50 million units of H100-equivalent AI compute online within five years," adding, "but with much better power efficiency." The H100 is NVIDIA's high-performance AI semiconductor chip.
On the 23rd, Musk pinned the post to the top of his account, underscoring his commitment to accelerating AI development. He had previously shared photos of the interior of 'Colossus 2,' an xAI data center currently under construction in the United States.
Musk stated that "230,000 GPUs, including 30,000 GB200s, are operating in a single supercluster called 'Colossus 1' to train Grok," and added that "the first batch of 550,000 GB200s and GB300s, also for training, will go online in Colossus 2 within a few weeks." The GB200 and GB300 are NVIDIA's latest AI platforms, based on its Blackwell architecture.
He emphasized, "As Jensen Huang said, no one can compete with xAI in terms of speed." Musk shared a video in which Huang, NVIDIA's CEO, said in an interview last year that the speed of xAI's infrastructure buildout was astonishing: "xAI accomplished in just 19 days what takes others a year," adding, "That is superhuman, and as far as I know, the only person in the world who can do that is Elon Musk."
The previous day, the U.S. daily The Wall Street Journal (WSJ) reported, citing sources, that with xAI rapidly burning through cash to build out its AI infrastructure, Musk is seeking to raise an additional $12 billion (approximately 16.6 trillion won) to purchase AI chips.