Nvidia announced on the 17th that it has acquired SchedMD, the company behind Slurm, an open-source workload management system for high-performance computing (HPC) and artificial intelligence (AI). Through the acquisition, Nvidia aims to strengthen the open-source software ecosystem.
Nvidia said it will continue to develop and distribute Slurm as open-source, vendor-neutral software. "We plan to lead AI innovation across researchers, developers, and enterprises," the company said, adding, "We will support the broader HPC and AI communities in using it across diverse hardware and software environments."
HPC and AI workloads involve complex computations that run as parallel jobs across clusters, making queue management, scheduling, and allocation of computing resources essential. As HPC and AI clusters grow larger and more powerful, using those resources efficiently becomes ever more important.
Slurm is a job scheduler designed for scalability, high throughput, and complex scheduling policies. It is used by the top 10 systems in the supercomputer performance rankings, and it holds a similar share among the top 100 supercomputers.
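To illustrate how such a scheduler is used in practice, the sketch below shows a user-side job submission in Python. It is a minimal, hypothetical example only: the script name, file names, and requested resources are illustrative assumptions, and it presumes access to a cluster where Slurm's sbatch command is installed.

```python
import subprocess
import textwrap

# Hypothetical Slurm batch script: it asks the scheduler for one node with
# one GPU for ten minutes and runs a placeholder training command via srun.
# Resource amounts and the train.py script are illustrative, not from the article.
batch_script = textwrap.dedent("""\
    #!/bin/bash
    #SBATCH --job-name=train-demo
    #SBATCH --nodes=1
    #SBATCH --gres=gpu:1
    #SBATCH --time=00:10:00
    #SBATCH --output=train-demo-%j.log

    srun python train.py
""")

# Write the script and hand it to sbatch, which places the job in the queue
# and prints the assigned job ID once accepted.
with open("train_demo.sbatch", "w") as f:
    f.write(batch_script)

result = subprocess.run(
    ["sbatch", "train_demo.sbatch"],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout.strip())  # e.g. "Submitted batch job 12345"
```

Slurm then decides when and where the job runs based on the requested resources, queue priorities, and site policies, which is the scheduling role described above.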
Supported on Nvidia hardware, Slurm is regarded as a core piece of infrastructure for generative AI. Foundation model builders and AI developers use it to manage the demands of model training and inference.
Nvidia has worked with SchedMD for more than a decade and plans to keep investing in Slurm's ongoing development after the acquisition. It will also continue to provide open-source support, training, and development services for Slurm to SchedMD's existing customers.
News of the acquisition followed the unveiling of the Nemotron 3 family of open models for building agentic AI applications. Nemotron supports Nvidia's broader sovereign AI strategy.
SchedMD CEO Danny Auble said, "This acquisition is a clear testament to the critical role Slurm plays in the world's most demanding HPC and AI environments," adding, "Nvidia's deep expertise and investment in accelerated computing will further strengthen Slurm's development and help meet the needs of next-generation AI and supercomputing."