On the 25th, artificial intelligence (AI) semiconductor startup Mobilint announced the release of the MLA100, an NPU PCIe card equipped with its AI semiconductor ARIES, which is optimized for deep learning computations.
The MLA100 is built around Mobilint's AI semiconductor ARIES and, according to the company, delivers more than 3.3 times the AI computation performance of comparable GPUs at roughly one-tenth the power consumption. It reaches a peak performance of 80 trillion operations per second (80 TOPS) and, the company says, surpasses comparable products in effective real-world performance.
The product operates at a low power draw of 25 watts and is compatible with both Linux and Windows environments. With these strengths, the MLA100 targets strong performance and energy efficiency across a range of AI applications, including AI servers, chatbots, smart factories, smart cities, smart healthcare, and robotics.
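The efficiency claim can be sanity-checked from the two figures quoted above (80 TOPS peak, 25 W). The sketch below simply divides them to get peak TOPS per watt; note this is a theoretical peak-compute ratio from the announcement's numbers, not a measured effective throughput, and the function name is illustrative.

```python
# Sketch: peak compute efficiency from the figures quoted in the
# announcement (80 TOPS peak performance, 25 W power draw).
# "TOPS" = trillions of operations per second; this is a peak-spec
# ratio, not a benchmarked effective-performance measurement.

def tops_per_watt(peak_tops: float, power_watts: float) -> float:
    """Peak compute efficiency in TOPS per watt."""
    return peak_tops / power_watts

mla100_efficiency = tops_per_watt(peak_tops=80.0, power_watts=25.0)
print(f"MLA100 peak efficiency: {mla100_efficiency:.1f} TOPS/W")  # 3.2 TOPS/W
```

By this back-of-the-envelope measure, the quoted specs work out to 3.2 TOPS/W, which is the kind of figure the power-efficiency comparison against GPUs rests on.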
ARIES, the core of the MLA100, is an NPU built on an ASIC architecture optimized for deep learning algorithms, which the company credits for its performance and price competitiveness. ARIES supports convolutional neural networks (CNNs), recurrent neural networks (RNNs), long short-term memory (LSTM) networks, and transformer-based models, and recently added support for large language models (LLMs) and multimodal models through a software update.
Furthermore, drawing on the hardware architecture and compiler technology Mobilint has accumulated over the years, the MLA100 supports more than 300 deep learning models and most major machine learning frameworks. It delivers high efficiency in deep learning computations and has been validated through proof-of-concept (PoC) projects. The first batch of mass-produced units is scheduled to ship by the end of 2024, which the company expects will demonstrate the product's performance and quality in the field.