On-device artificial intelligence (AI) semiconductor company DeepX said on the 8th that it has released the DX-H1 V-NPU, a chipset dedicated to video intelligence that handles video AI analysis on the scale of hundreds of channels at a low power level of 30 watts (W).
The product reduces the power consumption, cost, and complexity of large-scale video AI processing by integrating into a single card the video input, compression, and AI inference processes that had previously been split between graphics processing unit (GPU) servers and separate codec equipment.
DeepX emphasized that the DX-H1 V-NPU will be a turning point that changes the basic unit of video AI infrastructure from GPUs to V-NPUs, neural processing units (NPUs) dedicated to video.
The product maintains 24/7 real-time inference performance while cutting hardware costs by about 80% and power costs by about 85% compared with GPUs handling the same number of channels.
The company said the product is not merely a "low-cost substitute" for GPUs but one that changes the core design philosophy of video intelligence infrastructure, in which AI understands video.
While GPUs are strong in general-purpose computing, their architecture is not optimized for multichannel video I/O and real-time streaming.
A DeepX official said, "Large-scale video AI will no longer run by borrowing spare capacity from general-purpose GPU computing but will run on a dedicated chipset," and predicted that the technology will take hold as an industry sector of its own, spanning smart cities, traffic control centers, and national infrastructure.
The DX-H1 V-NPU won an Innovation Award for CES 2026, the world's largest technology trade show to be held in Las Vegas next January, and will be officially unveiled at the event.