AMD Chief Executive Officer Lisa Su unveiled a new server rack product, saying it will "respond to rising global artificial intelligence (AI) demand."
On Jan. 5 (local time), Su delivered the opening keynote of CES 2026 at the Venetian in Las Vegas, where she unveiled Helios, the company's next-generation AI data center rack. Taking the stage a day before the show officially opened, she was the first of this year's 11 keynote speakers to outline global technology shifts and present her company's business vision. Keynote speakers also introduce the overarching theme of CES.
Su delivered the keynote under the theme "AI Everywhere, for Everyone." She said, "At this year's CES, the industry is joining forces to show what becomes possible when we realize 'AI everywhere, AI for everyone.'"
Su's keynote featured OpenAI co-founder Greg Brockman, World Labs CEO Fei-Fei Li, and Blue Origin Senior Vice President John Couluris. She also invited White House Office of Science and Technology Policy Director Michael Kratsios to underscore that AMD is participating in the "Genesis Mission," the U.S. government's AI plan.
Su said that with AI adoption accelerating and training and inference workloads growing at an unprecedented pace, "we are entering the yotta-scale computing era." Global computing capacity rose from 1 zettaflops in 2023 to 100 zettaflops last year, she said, and within five years it will grow another 100-fold to 10 yottaflops.
FLOPS refers to the number of floating-point operations a computer can perform per second. Zetta denotes 10 to the 21st power, so 1 zettaflops is 10²¹ floating-point operations per second; yotta denotes 10 to the 24th power. Since 1 yottaflops equals 1,000 zettaflops, a 100-fold increase from 100 zettaflops works out to 10 yottaflops. Su cited the spread of AI services as the backdrop for this surge in computation. Since the launch of ChatGPT, AI users have surpassed 1 billion and are expected to grow to 5 billion, she said, making advance technical preparation necessary.
Unveiling Helios, Su called it "a product that prepares for the yotta era." Helios integrates 72 Instinct MI455 graphics processing units (GPUs) and 18 Venice data center central processing units (CPUs) into a single system to boost performance. Built on a 2-nanometer (nm; 1 nm is one-billionth of a meter) process, the MI455 is equipped with 432GB of HBM4 high-bandwidth memory, improving AI inference performance about tenfold over the previous generation. Su said, "As the world's best AI rack, this product is not a simple server rack but a 'monster.'"
AMD also introduced the Ryzen AI 400 and Ryzen AI Pro 400 series processors for AI PCs, along with Ryzen AI Halo, a mini-PC platform for AI developers roughly the size of a small box. These products are scheduled to roll out sequentially in the first and second quarters of this year.
It also showcased the Ryzen AI Embedded P100 and X100 series for automotive and industrial devices, which are slated to power in-vehicle infotainment systems, autonomous driving, and industrial automation robots. In the gaming sector, the company said it plans to release the Ryzen 7 9850X3D processor in the first quarter of this year.