Agentic AI, cited as the backdrop for the surge in Intel and AMD shares, is driving a "renaissance" for central processing units (CPUs) that had long been overshadowed by graphics processing units (GPUs). Intel jumped more than 20% immediately after releasing its first-quarter results this year, and AMD also climbed roughly 10% as server CPU demand was reevaluated.
On top of that, implementing Agentic AI services demands more capacity and speed than general-purpose servers provide, so additional demand is expected on top of the AI-driven memory boom. As AI moves from the "speaking stage" to the "working stage," the outlook that the memory supercycle could last longer than expected is gaining traction.
◇ Agentic AI emerges as a new engine of the AI infrastructure market
Agentic AI, simply put, means "AI that handles tasks on its own." Whereas existing Generative AI stopped at producing answers when asked questions, Agentic AI understands a user's goal, breaks work into multiple steps, finds necessary materials, calls external programs, reviews results, and proceeds to the next action.
For example, while conventional AI would stop at drafting an itinerary when asked to "plan a business trip," Agentic AI searches flights, compares hotel candidates, checks company travel policies, adjusts calendar events, and even prepares an approval request email. The key here is that AI shifts from a "tool for answers" to a "system that performs work."
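The plan-act-observe cycle described above can be sketched in a few lines of Python. This is only an illustration of the loop, not any vendor's actual framework: the scripted "planner" stands in for a real language model, and the business-trip tools are hypothetical placeholders.

```python
# Minimal sketch of an agentic loop. The planner here is a scripted stand-in
# for a real language model, and the tools are hypothetical placeholders.

def run_agent(goal, tools, next_step, max_steps=10):
    """next_step(history) returns (action, arg); 'finish' ends the loop."""
    history = [f"goal: {goal}"]
    for _ in range(max_steps):
        action, arg = next_step(history)            # model decides the next step
        if action == "finish":
            return arg                              # final deliverable
        observation = tools[action](arg)            # call an external program
        history.append(f"{action}({arg}) -> {observation}")  # review, continue
    return None

# Hypothetical business-trip tools (stand-ins, not real APIs).
tools = {
    "search_flights": lambda city: f"ICN->{city} 09:00",
    "book_hotel": lambda city: f"hotel near {city} HQ",
}

# Scripted plan: search flights, book a hotel, then finish with a summary.
script = iter([("search_flights", "SFO"),
               ("book_hotel", "SFO"),
               ("finish", "itinerary ready")])
result = run_agent("plan a business trip", tools, lambda history: next(script))
print(result)  # itinerary ready
```

The point of the loop structure is that each tool result is fed back into the history, so the next decision can depend on what actually happened, which is what separates an agent from one-shot answer generation.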
This shift is boosting CPU demand. GPUs take center stage during AI model training, but once Agentic AI is deployed as an actual service, there is much more for CPUs to do. Processes such as searching documents based on user requests, querying databases, checking security permissions, calling APIs (application programming interfaces), and orchestrating tasks in sequence are typically CPU territory. If GPUs are AI's compute engine, CPUs are closer to on-site managers that split up and consolidate each task.
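To see why the serving path leans on the CPU, consider a single agent request walked through the steps just listed. In this sketch every class and function name is a hypothetical stand-in, not a real framework's API; the only step that would touch a GPU is the model call itself.

```python
# Sketch of the CPU-side serving path for one Agentic AI request.
# All names are hypothetical stand-ins; only generate() represents GPU work.

class Index:
    def search(self, query, top_k):
        return [f"doc{i}" for i in range(top_k)]          # search-index lookup

class DB:
    def __init__(self):
        self.logs = []
    def has_permission(self, user, scope):
        return user == "alice"                            # permission check
    def fetch(self, doc_ids):
        return {d: f"contents of {d}" for d in doc_ids}   # database query
    def log(self, *entry):
        self.logs.append(entry)                           # task log

def handle_request(user, query, db, index, generate):
    if not db.has_permission(user, "agent"):              # security gate (CPU)
        raise PermissionError(user)
    docs = index.search(query, top_k=2)                   # retrieval (CPU)
    context = db.fetch(docs)                              # DB query (CPU)
    answer = generate(query, context)                     # the one GPU-bound step
    db.log(user, query, answer)                           # logging (CPU + I/O)
    return answer

answer = handle_request("alice", "travel policy?", DB(), Index(),
                        lambda q, ctx: f"answer using {len(ctx)} docs")
print(answer)  # answer using 2 docs
```

Of the five steps, four are permission checks, lookups, and bookkeeping of exactly the kind the article assigns to CPUs, which is why serving agents at scale pulls server CPU purchases along with GPU purchases.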
◇ Ripple effects proven by the numbers… a tailwind even amid a memory boom
Demand tied to Agentic AI is showing up in the first-quarter numbers of Intel and AMD this year. AMD said first-quarter data center segment revenue was $5.8 billion, up 57% from a year earlier, with strong demand for Epyc server processors contributing significantly to that growth. Intel's data center and AI segment revenue in the first quarter also rose 22% on-year to $5.1 billion. AI investment is not stopping at GPU purchases but is spreading across data center infrastructure, including CPUs.
Memory demand is increasing for the same reason. The core of Agentic AI is that it reasons multiple times, stores intermediate results, remembers prior conversations and documents, and continually references search results and business data. In this process, server DRAM becomes AI's workspace, so high-capacity DRAM and NAND flash are required. Memory needs grow not only for running models but also for storing session state, vector databases, search indexes, document caches, and task logs.
A semiconductor industry official said, "If high bandwidth memory (HBM) was the first ignition factor of the memory super boom, Agentic AI is already acting as the second ignition factor that expands and extends the boom to DDR5 and server DRAM," adding, "For CPUs to conduct the orchestra of AI services, not only advanced memory such as DRAM and NAND but also new concepts of memory infrastructure like Compute Express Link (CXL) could come to the fore."