Google is watching the technological advances of Chinese artificial intelligence (AI) companies closely, but says they have yet to introduce innovations that did not already exist. As the race for AI supremacy intensifies, the company also emphasized the need for a balanced regulatory approach that strengthens security and safety without hindering technological progress.
Alice Friend, head of Google's AI and emerging technology policy, said in a group interview held at the Google Korea office in the Finance Center in Gangnam, Seoul, on the 13th, "While Chinese AI companies show creative approaches, they have not presented entirely new innovations to date." She added, "With AI technology advancing so rapidly, governments must establish policies that secure safety and reliability while promoting innovation."
Recently, major Chinese companies, including DeepSeek and Manus, have been rapidly expanding the market by releasing generative AI models as open source, with Alibaba, Baidu, and ByteDance also joining in. DeepSeek drew industry attention by posting strong results on global benchmark tests with its large language model (LLM) 'DeepSeek-MoE (Mixture of Experts model).' However, concerns have been raised about the originality of the technology, as it shares a similar structure with OpenAI's GPT models.
Despite the rapid advancement of Chinese AI technology, concerns are growing over whether its security and data protection standards are adequate. Eunice Huang, who leads Google Asia Pacific's AI and emerging technology policy, said, "It is essential for governments to raise security and privacy standards when introducing AI products," adding, "especially when AI products handle public services or sensitive data, stricter reviews are necessary." She further stated, "Regardless of where AI technology is developed, having reliable security standards is key."
During the interview, there was also an evaluation of the Korean AI Basic Law. Friend noted, "Rather than regulating AI technology itself, it is important to focus on how the technology is used and its outcomes," pointing out that "currently, parts of the AI Basic Law aim to regulate the technology itself."
She said, "While the bill emphasizes 'strengthening Korea's AI competitiveness' at the outset, it contains provisions that restrict the technology." She also stated, "What was illegal without AI remains illegal in the AI era, and AI should be addressed within the existing regulatory framework."
Huang explained, "Currently, the bill defines various fields such as healthcare, transportation, and public services as 'high-impact AI,' but not all AI technologies carry the same level of risk." She added, "For instance, systems that manage patient reservations and those that diagnose cancer have completely different levels of risk. A more precise regulatory design is necessary."
Additionally, she mentioned that it should be made clear where the responsibility for compliance with AI-related regulations lies between AI model developers and organizations using AI. Huang said, "As AI technology is changing rapidly, there is a need for cooperation among the government, industry, and academia to establish more innovation-friendly regulations."
Friend stated regarding the direction of AI regulations, "Security and privacy protection are essential, but we must approach it cautiously in a way that does not hinder technological innovation," explaining, "Most countries are adopting a 'light touch' approach to regulation."
She continued, "Given the rapid pace of AI advancement, it is crucial to maintain a balance between security and innovation," adding, "Governments around the world must collaborate to support technological advancement while ensuring AI is used safely."