For autonomous driving technology to advance, the key question is how accurately it can distinguish vehicles, pedestrians, motorcycles, and other objects on the road. But for artificial intelligence (AI) to learn this, countless videos have had to be labeled one by one, a process that takes a lot of time and carries a heavy cost.
On the 17th, a research team comprising Kwon Sun and Lee Jin-hee of the Future Mobility Research Department at the Daegu Gyeongbuk Institute of Science and Technology (DGIST) announced that it had developed an AI training technology called "MultipleTeachers," which delivers state-of-the-art recognition performance with almost no labels.
This technology groups similar objects to build multiple "teacher" networks, which collaborate to automatically generate pseudo-labels. Even without people annotating the data one by one, the AI creates its own training data and learns from it.
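The idea described above, several specialized teacher networks voting confident detections into training targets for a student, can be sketched in a few lines. This is only an illustrative outline; the `Detection` class, the confidence threshold, and the merging rule are assumptions for illustration, not details of the actual MultipleTeachers method.

```python
# Minimal sketch of multi-teacher pseudo-labeling.
# Assumption: each teacher specializes in a group of similar object classes
# and returns scored detections for an unlabeled LiDAR scan.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str      # e.g. "vehicle", "pedestrian", "two-wheeler"
    score: float    # teacher confidence in [0, 1]

def pseudo_label(teachers, scan, threshold=0.7):
    """Merge detections from several class-group teachers into pseudo-labels.

    Only detections above the confidence threshold are kept, so the student
    network trains on reasonably reliable automatically generated targets.
    """
    merged = []
    for teacher in teachers:
        for det in teacher(scan):
            if det.score >= threshold:
                merged.append(det)
    return merged
```

In this sketch the pseudo-labels then stand in for human annotations when training the student detector on otherwise unlabeled scans.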
The research team also added a feature called "PointGen" to reduce recognition errors that occur when LiDAR (laser range sensor) data is scarce. Thanks to this, major urban objects such as vehicles, pedestrians, and two-wheelers can be recognized far more precisely than before.
The research team said, "By training on a small amount of labeled data together with unlabeled data, we greatly improved learning efficiency," and added, "We presented a new AI training paradigm that enhances the safety of autonomous vehicles."
The research team also built a LiDAR-only dataset called "LiO," reflecting Korea's urban environment, together with FutureDrive, an autonomous driving startup founded at DGIST.
The dataset was built from data collected with a single 128-channel LiDAR and six cameras, and its quality was refined through at least three rounds of expert verification. It consists of 21,000 labeled videos and 96,000 unlabeled videos, making it usable in a variety of experimental environments.
With only 1% of labels on Waymo, a global autonomous driving dataset, the method recorded an mAP (mean average precision, a measure of object recognition accuracy) of 47.5; with 2% of labels on KITTI, 72.2; and with 15% of labels on LiO Large, 61.4. In all experiments it outperformed existing state-of-the-art methods, and recognition accuracy for small objects, especially pedestrians and motorcycles, improved significantly.
This study was carried out with support from a DGIST institutional program and the R&D Special Zone promotion program of the Ministry of Science and ICT, and the results will be officially presented at the international conference ICCV 2025 in October.
Dr. Lee said, "It is an honor to present DGIST's perception technology at ICCV 2025, the world's top vision conference," and added, "We will open the LiO dataset to share knowledge with the research community and expand technology adoption into various fields such as autonomous driving, smart cities, and logistics robotics."