Anthropic, the "OpenAI rival" that has long emphasized artificial intelligence (AI) safety, has moved to relax its safety stance, citing concern that maintaining a high level of safety could cause it to fall behind in the competition.
On the 24th (local time), Anthropic said in "Responsible Scaling Policy 3.0," released on its website, that it would ease key safety measures. Previously, Anthropic halted development when one of its AI models could be classified as potentially dangerous; under the new policy, it will no longer pause development if a competitor has already released a similar or superior model.
The Wall Street Journal (WSJ) described this as a major change in direction compared with the policy released in Sep. 2023. The model development and testing guidelines Anthropic released at the time had positioned it as one of the most safety-focused companies in the industry.
Anthropic is competing not only with U.S. rivals such as OpenAI, Google, and xAI, led by Tesla Chief Executive Officer Elon Musk, but also with Chinese AI companies, which have been unveiling cutting-edge AI models one after another this year.
Recently, Anthropic has been in conflict with the U.S. Department of Defense over the scope of military use of its AI model Claude. U.S. Defense Secretary Pete Hegseth warned that if Anthropic does not agree to the Department of Defense's demands by the 27th, it could lose its Pentagon contracts or face penalties such as designation as a "supply chain risk" company or application of the Defense Production Act (DPA).
Anthropic said the safety policy change reflects the pace of AI progress and the absence of federal AI regulation. "Even though AI capabilities have advanced rapidly over the past three years, the government response on AI safety has been slow," the company said, adding, "The policy environment has shifted to prioritize AI competitiveness and economic growth, and safety-centered discussions have yet to make meaningful progress at the federal level." It also emphasized that it would maintain industry-leading safety standards.
Anthropic added that the decision is unrelated to negotiations with the U.S. Department of Defense.