A U.S. federal court blocked the Donald Trump administration's move to designate the artificial intelligence (AI) company Anthropic as a "supply chain risk" corporation.
Judge Rita Lin of the U.S. District Court for the Northern District of California granted Anthropic's request for a temporary restraining order on the 26th (local time), issuing a preliminary injunction that halts the measure's effect until a ruling in the main lawsuit.
The court said the Department of Defense (War Department) designating Anthropic as a supply chain risk corporation "does not appear to be directly related to the national security concerns the government has articulated." It added, "If the Department of Defense is concerned about the integrity of its command-and-control systems, it could simply stop using Anthropic's AI model Claude, but this action supports the inference that Anthropic is being punished for criticizing the government's contracting stance," emphasizing that the measure violates the First Amendment's free speech protections.
It went on to criticize, "There is no statute that supports the Orwellian notion that a U.S. corporation can be branded as a potential adversary and destroyer of the United States merely for expressing opposition to the government."
Citing a brief that described the Defense Department's action as an "attempted corporate killing," the court explained its reason for issuing the temporary restraining order by saying, "Even if not a killing, the evidence indicates it will cause serious harm to Anthropic."
It also noted that within days of the Defense Department's announcement, Anthropic suffered revenue hits, including delays in signing contracts worth hundreds of millions of dollars, cancellations by some existing customers, and suspended negotiations with potential customers.
In particular, it pointed out that the Defense Department's measure barring private companies from transacting with Anthropic lacked legal grounds.
The court also barred federal agencies from following or enforcing President Trump's social media directive instructing agencies other than the Defense Department to stop using Anthropic's technology.
It further ordered federal agencies, including the Defense Department, to report by the 6th of next month on the steps taken to comply with the order.
However, it stayed the effect of the order for seven days to allow the administration an opportunity to appeal.
Anthropic's Claude was the only AI used in the U.S. military's classified systems, but Anthropic clashed with the Defense Department by arguing that its AI models should not be used for large-scale domestic surveillance or autonomous lethal weapons. After insisting that AI must be available without restriction for "all lawful purposes," the Defense Department on the 27th of last month took the hard-line step of designating Anthropic a "supply chain risk" corporation, and President Trump also instructed all federal agencies to stop using Anthropic's technology.
In response, Anthropic filed a lawsuit against the Trump administration on the 9th seeking to vacate the designation, along with a motion for an injunction.
Even after announcing Anthropic's ouster, the U.S. military reportedly used Claude in its strikes on Iran. At the time, to hit about 1,000 targets during the first 24 hours of the airstrikes, the U.S. military used the AI-based military intelligence platform "Maven Smart System" developed by Palantir, which was equipped with Anthropic's Claude.