Jaguar Land Rover (JLR), a British carmaker, suffered a massive cyberattack at the end of August last year that halted factory operations for more than a month. Jaguar, which had produced about 5,000 vehicles a week, shut its plants for about five weeks, cutting off orders to more than 5,000 partners and hitting the entire U.K. auto manufacturing supply chain.
The incident is considered the largest cyber breach in U.K. history. In the wake of the hacking, vehicle sales at Jaguar, the U.K.'s largest car manufacturer, fell 25%, and the U.K. economy suffered at least £1.9 billion (about 3.75 trillion won) in damage, according to estimates by the U.K. nonprofit Cyber Monitoring Center (CMC).
Kim Sang-woo, a partner who leads cybersecurity consulting at EY Hanyoung, spoke on the 10th at a Fortinet Korea webinar titled "Security threats and response strategies in physical AI environments." "As the adoption of artificial intelligence (AI) accelerates, security threats across manufacturing are surging," he said, diagnosing that, as the Jaguar case shows, cyberattacks on manufacturers have become a risk that reaches from production-line shutdowns to corporate revenues and even national supply chains.
In particular, in physical AI environments represented by robots, autonomous vehicles, and smart factories, the risk is greater because security incidents can go beyond simple data leaks to cause robot malfunctions, manipulation of autonomous vehicles, production stoppages, and, in the worst cases, loss of life. Fortinet projected that in physical AI environments, the attack surface will grow exponentially—from AI models to robots, drones, autonomous vehicles, and AI agents—significantly increasing the types of cyberattacks that must be addressed.
Moon-gwi, an executive vice president at Fortinet Korea, said, "In the past, it was enough to focus on securing IT infrastructure, but now we must protect physical environments to prevent catastrophic damage," adding, "With advances in AI, attackers carefully select targets and carry out precision strikes, so AI-based defense-in-depth is no longer optional—it has become a prerequisite that corporations must have."
Fortinet cited the following cyberattacks that corporations should watch for in the physical AI era: "model poisoning," which tampers with training data to induce malfunctions; "unauthorized access," in which hackers directly or indirectly obtain AI model privileges; "malicious prompts," which are command-based attacks; "shadow AI," systems not approved by the organization; and "data exfiltration," in which sensitive data is leaked externally without authorization.
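To make the first item concrete, model poisoning can be as simple as injecting mislabeled samples into a training set. The sketch below is a toy example written for illustration only; the classifier, features, and labels are all hypothetical and do not come from Fortinet's presentation:

```python
# Toy illustration of "model poisoning" (all names and data are hypothetical).
from collections import Counter, defaultdict

def train(samples):
    """Learn the majority label for each feature value seen in training."""
    by_feature = defaultdict(list)
    for feature, label in samples:
        by_feature[feature].append(label)
    return {f: Counter(labels).most_common(1)[0][0]
            for f, labels in by_feature.items()}

# Clean training data: cracked parts fail inspection, smooth parts pass.
clean = [("crack", "fail")] * 5 + [("smooth", "pass")] * 5
model = train(clean)         # learns that "crack" means "fail"

# An attacker tampers with the training set by injecting mislabeled samples.
poisoned = clean + [("crack", "pass")] * 6
bad_model = train(poisoned)  # now learns that "crack" means "pass"
```

A real attack would target far larger models, but the failure mode is the same: the attacker shifts what the model learns from, rather than breaching the system at runtime, so defective output can sail through inspection.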
Moon said, "Threats targeting new attack surfaces are becoming reality, such as humanoid robots deployed on actual factory floors transmitting data without authorization, or attempts to seize vehicle control through backdoors (covert channels for unauthorized access) planted in the AI embedded in autonomous vehicles."
Notably, researchers at Alias Robotics, a robot cybersecurity company, discovered in September last year that the G1, a humanoid robot from Chinese robotics company Unitree, was transmitting data to servers in China every five minutes. The researchers warned that the backdoor vulnerability found in the G1 could be exploited for hacking, data leaks, and robot malfunctions, noting that it "could cause damage beyond imagination."
In February, the Georgia Institute of Technology in the United States said it had discovered VillainNet, a backdoor inserted into the AI network of an autonomous vehicle. Moon said, "The backdoor lies dormant under normal conditions and activates only when specific conditions are met, such as a change in road conditions on a rainy day, and has been shown to seize vehicle control with a 99% success rate."
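The conditional-activation pattern described here can be sketched in a few lines. The mock-up below is purely illustrative and is not code from the actual research; the function and field names are invented. It shows why such a backdoor is hard to catch: the routine behaves correctly on every input except the attacker's chosen trigger.

```python
# Hypothetical sketch of a condition-triggered backdoor in a control routine.
# The function and sensor field names are invented for illustration.

def steering_command(frame: dict) -> str:
    """Return a steering decision for one sensor frame."""
    # Dormant backdoor: fires only when the trigger conditions co-occur,
    # echoing the "road change on a rainy day" trigger described above.
    if frame.get("rain") and frame.get("road_state_changed"):
        return "ATTACKER_OVERRIDE"
    return "FOLLOW_LANE"  # correct behavior on all other inputs

# Either condition alone leaves the vehicle behaving normally:
normal = steering_command({"rain": True, "road_state_changed": False})
# Only the full trigger activates the malicious branch:
hijacked = steering_command({"rain": True, "road_state_changed": True})
```

Because the malicious branch never executes during ordinary testing, a backdoor like this can pass validation and stay invisible until the trigger occurs in the field, which is why defenders emphasize monitoring AI behavior at runtime rather than relying on pre-deployment testing alone.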
He emphasized that to respond effectively to increasingly sophisticated cyberattacks powered by AI, it is essential to adopt an AI-based layered security architecture that protects the entire process, from corporate networks to AI runtime environments.
Kim also said, "The decisive battleground for manufacturing security is to treat AI and the factory's operational technology (OT) control systems as one and build a security framework to match," adding, "In the AI era, rather than trying to completely block cyberattacks, we should regard them as inevitable risks and make rapidly building resilience the top priority."