The National Intelligence Service on the 5th jointly issued "AI supply chain risks and mitigations" with cyber defense agencies from seven countries, including the U.S. National Security Agency (NSA), the Canadian Centre for Cyber Security (CCCS), the Cyber Security Agency of Singapore (CSA), the New Zealand National Cyber Security Centre (NCSC-NZ), Japan's National center of Incident readiness and Strategy for Cybersecurity (NISC), the U.K. National Cyber Security Centre (NCSC), and the Australian Signals Directorate (ASD).
The National Intelligence Service (NIS) said the advisory was prepared to address security threats stemming from the complexity of the AI supply chain: in a structure involving multiple suppliers of models, data, and infrastructure, risk factors such as hidden backdoors increase.
The advisory defines AI as a system that must embed security from the design stage, not as an asset to be managed only after deployment. It presents risk factors and mitigation measures across five categories: data, machine learning models, software, infrastructure and hardware, and third-party services.
According to the advisory, low-quality or biased data can cause judgment errors, so data must come from reliable sources. Because machine learning model files can conceal malware or carry inserted backdoors, safe file formats and transparent models should be used.
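The advisory's point about safe model file formats can be illustrated with a brief sketch (illustrative only, not taken from the advisory): Python's pickle format, long used to distribute model weights, executes code during deserialization, which is why data-only formats such as safetensors or JSON are preferred for models from untrusted sources.

```python
import json
import pickle

# Pickle-based model files can embed executable payloads: unpickling
# invokes __reduce__, so merely loading an untrusted model runs its code.
class MaliciousPayload:
    def __reduce__(self):
        # Returns a callable and arguments that pickle executes at load time
        # (a harmless print here; real attacks could do anything).
        return (eval, ("print('backdoor executed at load time')",))

blob = pickle.dumps(MaliciousPayload())
pickle.loads(blob)  # loading the untrusted bytes triggers the payload

# Safer alternative: store weights in a data-only format such as JSON
# (the same principle behind formats like safetensors): parsing the
# file reconstructs data only and cannot execute embedded code.
weights = {"layer1.weight": [[0.1, 0.2], [0.3, 0.4]], "layer1.bias": [0.0, 0.0]}
serialized = json.dumps(weights)
restored = json.loads(serialized)
assert restored == weights
```

The contrast is the core of the mitigation: a format whose loader can run code turns every downloaded model into a potential installer, while a data-only format limits the blast radius to bad weights.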
It also noted that because AI infrastructure faces new security threats such as malicious firmware injection, it should be managed by applying established information security principles such as network separation and in-house authentication.
The National Intelligence Service (NIS) jointly issued "Safe AI development guidelines" with the United States and the United Kingdom in Nov. 2023, and distributed the "AI security guidebook" in Dec. 2024.
An official at the National Intelligence Service (NIS) said, "This advisory organizes AI-specific risks from a supply chain perspective and suggests a preventive, proactive direction for security management," adding, "We will actively support safe use of AI in Korea in cooperation with major countries."