Meta logo. (Yonhap News)

Meta is reportedly test-producing its own semiconductor for training artificial intelligence (AI) systems. As big tech companies have begun building custom chips (ASICs) to reduce their dependence on Nvidia, Meta also appears to be accelerating efforts to secure its own AI chip ecosystem and cut costs. An ASIC (application-specific integrated circuit) is a custom chip designed for specific workloads such as AI training and inference.

According to Reuters and other outlets on the 12th, Meta has begun production testing of its AI training chip, the "Meta Training and Inference Accelerator" (MTIA). Meta is reported to have partnered with Taiwan Semiconductor Manufacturing Company (TSMC) to manufacture the chip. After completing the tape-out, the step in which a finished chip design is sent to the fab, Meta has reportedly begun deploying test units, with plans to ramp up production depending on the test results.

Meta plans to deploy the MTIA for AI training in earnest starting next year. AI training is the process by which a model learns patterns from input data so that it can handle new data. Meta has introduced custom AI chips before, but only for inference, powering the recommendation systems behind Facebook and Instagram feeds since last year. The new chips are expected to be used directly for generative AI, such as Meta's AI chatbot, "Meta AI."

Citing sources, Reuters explained that Meta's new AI training chip is designed as a dedicated accelerator, optimized to handle only AI workloads. The industry expects such dedicated accelerators to deliver better power efficiency than the graphics processing units (GPUs) commonly used for AI.

If Meta succeeds in producing its own AI training chips, it stands to cut related spending. Meta is one of Nvidia's largest customers; after an earlier in-house chip effort failed, it placed a large order for Nvidia GPUs in 2022. This year, Meta indicated that its investment in AI infrastructure could reach up to $65 billion (approximately 94.445 trillion won), more than half of its projected total annual expenses of up to $119 billion.

Other big tech companies have also jumped into ASIC production, putting pressure on Nvidia. ASICs offer lower unit costs, power consumption, and total investment than GPUs. Market research firm Gartner forecasts that the global AI semiconductor market will grow from $42.2 billion in 2022 to $196.5 billion by 2028, an average annual growth rate of 29.2%.

Last April, Google officially launched "v5p," a new tensor processing unit (TPU) built to train its generative AI model "Gemini." TPUs are Google's in-house AI-dedicated chips. Amazon Web Services (AWS) trains its own AI on its "Trainium 2" chip, which uses HBM3E memory, and is developing a next-generation ASIC, "Trainium 3." Microsoft also unveiled the "Maia 100," a chip it designed for AI training and inference, in November 2023.