A conceptual diagram of research by Jeon Sang-hoon, School of Electrical Engineering, KAIST, and his team./Courtesy of KAIST

From smartphone cameras to autonomous driving sensors, semiconductor technology that processes data quickly and at low power inside the device itself is emerging as a source of competitiveness. Korean researchers have presented core technology for an integrated "sensing–compute–storage" artificial intelligence (AI) semiconductor to achieve this.

A team led by Professor Jeon Sang-hoon of the KAIST School of Electrical Engineering said on Dec. 31 that it presented six papers at the world-leading semiconductor conference, the IEEE International Electron Devices Meeting (IEDM) 2025, held in San Francisco from Dec. 8 to 10. Among them, one study was selected as both a Highlight paper and a Top-Ranked Student Paper.

As AI becomes smarter, semiconductor technology that processes data faster and with less power is growing in importance. In particular, in devices equipped with cameras and various sensors, the conventional architecture in which sensing (detection), compute (processing), and memory (storage) operate separately drives up power consumption and latency. That is because energy is wasted in transferring captured information to another chip, storing it, and then retrieving it again for computation.

The core of the solution proposed by the KAIST team is an architecture that computes right where it senses and stores only the necessary information. In the study selected as a Highlight paper, the team developed a "neuromorphic vision sensor" designed to handle in a single chip what the human eye and brain do together. By stacking a light-detecting sensor and a signal-processing circuit vertically within one chip, it allows detection and decision-making to occur simultaneously.

Building on this, the researchers also announced six key technologies that improve AI semiconductors end to end, from sensors to memory. Using existing semiconductor processes, they built neuromorphic semiconductors that operate like the brain while consuming far less electricity, as well as next-generation memory optimized for AI.

In sensors, they reworked the conventional capture-then-send-for-computation flow so that the sensor extracts only the key information and feeds it directly into processing. This reduces the amount of data that must be sent off-chip, cutting power consumption and potentially speeding up response.
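For readers who want a concrete feel for this idea, the short Python sketch below (not taken from the papers; the resolution, brightness threshold, and event encoding are all assumptions) compares how many bytes leave a hypothetical sensor when a full frame is shipped off-chip versus when only above-threshold pixels are forwarded as events.

```python
import numpy as np

# Illustrative sketch only; the resolution, threshold, and event format are
# assumptions, not details from the KAIST papers.

FRAME_SHAPE = (480, 640)   # assumed sensor resolution (8-bit grayscale)
THRESHOLD = 200            # assumed brightness cutoff for "key" pixels

# Stand-in capture: a mostly dark scene with one small bright object.
frame = np.zeros(FRAME_SHAPE, dtype=np.uint8)
frame[200:220, 300:340] = 255

# (a) Conventional flow: the whole frame is shipped off-chip before processing.
bytes_sent_conventional = frame.size  # one byte per pixel

# (b) In-sensor filtering: only above-threshold pixels leave the sensor,
#     encoded as (row, col, value) events.
rows, cols = np.nonzero(frame > THRESHOLD)
events = np.stack([rows, cols, frame[rows, cols]], axis=1).astype(np.uint16)
bytes_sent_filtered = events.size * events.itemsize

print(f"conventional transfer: {bytes_sent_conventional:,} bytes")
print(f"in-sensor filtered:    {bytes_sent_filtered:,} bytes")
```

In this toy scene the filtered payload is roughly 60 times smaller than the raw frame, which is the kind of reduction an in-sensor approach aims for.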

In memory, they focused on the low-power, stable storage that AI requires. The team implemented next-generation NAND flash that operates at lower voltages, endures extended use, and reliably retains data even when power is off.

Jeon said, "It is significant that we demonstrated the entire flow can be integrated into one system, moving away from the conventional approach of designing sensing, compute, and storage separately," adding, "We will expand this into a platform that can be applied widely, from ultra-low-power edge AI to large-scale AI memory."

This study was conducted in collaboration with Samsung Electronics, Kyungpook National University, and Hanyang University.
