Mark Zuckerberg, Meta CEO./Courtesy of News1

In contrast to the open-release policy of Chinese artificial intelligence (AI) startup DeepSeek, Meta has drawn attention by stating that it will not release AI systems that pose serious security risks. Analysts in the IT industry view the move as an attempt by Meta to differentiate itself amid security concerns about DeepSeek in the United States and Europe.

In a policy document titled 'Frontier AI Framework' released on the 3rd (local time), Meta said that some high-performance AI systems may not be released if they include dangerous capabilities. Meta explained that this evaluation draws on input from both internal and external researchers and is reviewed by senior decision-makers.

◇ "We will not release AI that creates harmful corporations or lethal weapons."

Meta has identified two types of AI systems it considers too dangerous to release: ▲ high-risk systems and ▲ critical-risk systems. According to the document, these are systems capable of aiding cyberattacks or chemical and biological attacks. As examples, Meta cited technology that could automatically compromise even the most heavily protected corporate security environments, and technology that could aid the proliferation of high-impact biological weapons.

Meta had initially taken a more permissive attitude toward disclosing its technology than other major American tech companies. Meta is considered a latecomer in the AI industry. CEO Mark Zuckerberg released the LLaMA models as partially open source, in contrast to the closed approach of OpenAI's ChatGPT. He has also emphasized openness in AI, saying that he would eventually release artificial general intelligence (AGI) openly as well.

Industry observers say Meta is recalibrating its open stance on AI disclosure with DeepSeek in mind. DeepSeek has released its AI models as open source for anyone to analyze. The attention on DeepSeek's open-source models has prompted movement among major American tech companies as well. Sam Altman, CEO of OpenAI, said on the 31st of last month that the company is internally discussing the possibility of opening up some of its AI model technology and publishing more of its research.

◇ Meta, which released 'LLaMA' as partially open source... conscious of DeepSeek?

Meta appears to be mindful of security issues, often cited as a weakness of the open-source approach. Government agencies in several countries, including the U.S., European nations, and Japan, have banned the use of DeepSeek, citing risks of information leakage and security vulnerabilities. IT media outlet TechCrunch explained, 'Meta's AI model LLaMA has recorded hundreds of millions of downloads and achieved success. However, at least one adversary of the U.S. has reportedly used LLaMA to develop a defense-related chatbot.'

TechCrunch also reported, 'DeepSeek likewise makes its AI systems publicly available, but there are concerns that its models lack safeguards, making it easy to generate harmful or dangerous content.'

In the policy document, Meta stated, 'We are sharing our current approach to responsibly developing advanced AI systems,' adding, 'We hope to offer insight into our decision-making process and to encourage discussion and research on improving AI evaluation methods that weigh both risks and benefits.'

At the same time, Meta signaled that it would maintain its open-source policy. In the document, Meta stated, 'Our open-source approach allows a broader community to independently evaluate model capabilities,' asserting that this helps it anticipate and mitigate risks more effectively.
