Illustration: ChatGPT

Just after the U.S. Department of Defense labeled AI company Anthropic a "supply chain risk" and ordered federal agencies to stop using its models, rival OpenAI signed a deal to provide its services on the Pentagon's classified network. An internal Silicon Valley clash over the acceptable scope of military AI has collided head-on with Washington's security policy.

According to foreign media reports on the 27th (local time), OpenAI CEO Sam Altman announced the deal on X (formerly Twitter) the same day, saying that "a ban on mass surveillance" and "human accountability for the use of force" are core safety principles. Altman said the Department of Defense had agreed to these principles and to technical safeguards. Under the contract, OpenAI will deploy its AI models on a classified cloud network.

The announcement came just hours after Anthropic was effectively pushed out of the federal government. Earlier, U.S. President Donald Trump directed federal agencies to stop using Anthropic's AI model Claude, and Secretary of Defense Pete Hegseth designated Anthropic a "supply chain risk," notifying agencies to transition to other providers within six months.

The spark was Anthropic's refusal to comply with the Department of Defense's demands. Anthropic declined to lift its usage restrictions, saying it opposes Claude being used for mass surveillance inside the United States or for fully autonomous lethal weapons that operate without human involvement. In response, the Department ratcheted up the pressure, even raising the possibility of invoking the Cold War–era Defense Production Act (DPA), and Anthropic said it would pursue legal action.

The repercussions spread across Silicon Valley. Employees at Amazon, Google, Microsoft, and OpenAI, along with a coalition of labor groups, issued an open letter urging their executives to stand in solidarity with Anthropic and to reject the Department of Defense's demand for unrestricted use.

Elon Musk, CEO of xAI, which recently won Department of Defense approval to use its systems for classified work, wrote on X that "Anthropic hates Western civilization," siding with the Trump administration. Musk has previously mocked Anthropic as "Misanthropic," continuing his public criticism of the company.

Anthropic CEO Dario Amodei, formerly of OpenAI, has long emphasized a safety-first stance. By contrast, since 2024 OpenAI has publicly broadened the framework for national security cooperation and expanded its footprint in military-related projects, including work with Anduril.

Industry observers say competition over security and ethical standards is intensifying as both companies move toward monetization with initial public offerings in mind. Voices in the military and the opposition party are also urging caution. Jack Shanahan, a retired general who was the first director of the U.S. Department of Defense's Joint Artificial Intelligence Center (JAIC), said, "No large language model (LLM) in its current form should be used in fully lethal autonomous weapon systems," and House Minority Leader Hakeem Jeffries, a Democrat, backed Anthropic's decision, saying, "Mass surveillance of American citizens is unacceptable."

※ This article has been translated by AI.