Facebook parent Meta's in-house AI chip Meta Training and Inference Accelerator (MTIA) /Courtesy of Meta

Meta, Facebook's parent company, unveiled four in-house artificial intelligence (AI) chips on the 11th (local time). By introducing newly developed self-made AI chips after signing large-scale AI chip supply contracts with Nvidia, AMD, and Google, Meta dispelled recent speculation that its chip development was faltering.

In a blog post that day, Meta introduced four models in its in-house AI chip lineup, the Meta Training and Inference Accelerator (MTIA): the MTIA 300, 400, 450, and 500. Song Lee-joon, Meta's vice president of engineering, told CNBC that "chips designed by Meta are manufactured by Taiwan's TSMC," noting that this approach can improve price-to-performance across data centers compared with relying solely on external semiconductor companies.

Among the newly revealed AI chips, the MTIA 300 has already gone into production, with some units deployed in data centers. The MTIA 300 is optimized for models that recommend content or ads on Meta's social media platforms such as Facebook and Instagram.

The MTIA 400, 450, and 500, which will be released going forward, are planned for deployment to data centers at roughly six-month intervals through next year. The MTIA 400, codenamed "Iris," supports generative AI models, handling tasks such as generating images or videos from user requests.

The MTIA 450 and 500 are chips specialized for AI inference, characterized by a significant increase in the bandwidth of high-bandwidth memory (HBM), which is critical to inference performance.

Regarding its dual-track approach of adopting external chips such as Nvidia and AMD graphics processing units (GPUs) while also producing in-house chips, Meta said, "Mainstream chips are designed for the most demanding task, AI training, so they are less cost-efficient for tasks like inference," adding, "By contrast, MTIA is optimized for inference."

Explaining the rationale for the short six-month development cycle, Vice President Song said, "AI models are evolving faster than the traditional chip development cycle," adding, "We decided to iterate and improve rather than bet long-term on a single design." The strategy is to adopt external chips for training and use in-house chips for inference to improve efficiency.

However, Song also expressed concern about the global memory chip shortage. Song said, "We are worried about the HBM supply (shortage) situation," but added, "We believe we have already secured sufficient (memory) volume to match our planned production."

※ This article has been translated by AI.