The U.S. Department of Defense recently designated the artificial intelligence (AI) company Anthropic a "supply chain risk." /Courtesy of Yonhap News Agency

Anthropic, the artificial intelligence (AI) company designated by the U.S. Department of Defense (Pentagon) as a "supply-chain risk," has filed a lawsuit against the Donald Trump administration seeking to overturn the designation.

According to Bloomberg News and other major foreign media, on the 9th (local time) Anthropic filed suit in the U.S. District Court for the Northern District of California against 18 federal agencies, including the Department of Defense, and senior administration officials, including Defense Secretary Pete Hegseth, asking the court to declare the supply-chain risk designation unlawful and to stay its effect.

In the complaint, Anthropic called the Department of Defense's designation of the U.S. company as a supply-chain risk "an unprecedented illegal act," arguing that "the Constitution does not allow the government to wield vast power to punish corporations for statements made in defense of their own positions."

It added that "a supply-chain risk designation is permissible only when necessary to protect against the risk that hostile nations will destroy or subvert information systems for national security purposes, and Anthropic does not meet that standard," noting that Anthropic is in fact the first U.S. company to receive the designation.

It also emphasized that the government's decision to designate Anthropic a supply-chain risk while simultaneously allowing it to continue providing services for six months, together with the Department of Defense's earlier threat to invoke the Defense Production Act to forcibly requisition Anthropic's technology, contradicts the claim that Anthropic poses a security threat.

The complaint states that "the Department of Defense could terminate its contract with Anthropic and procure services from another AI company," adding that "this unnecessary and extremely punitive measure is a textbook case of unconstitutional retaliation."

It also cited, as evidence of retaliation, President Trump's earlier remark in a media interview that "I fired them like a dog because they (Anthropic) shouldn't have done that."

Anthropic also named other federal agencies, such as the General Services Administration (GSA), as defendants alongside the Department of Defense, because those agencies cut ties with the company following President Trump's directive.

Anthropic's AI model Claude was the only AI used in the U.S. military's classified systems, but the company clashed with the Department of Defense by insisting that its models not be used for large-scale domestic surveillance or autonomous lethal weapons. The Department of Defense maintained that AI must be available without restriction for "all lawful uses," and on the 27th took the drastic step of designating Anthropic a "supply-chain risk," while President Trump ordered all federal agencies to stop using Anthropic's technology.

※ This article has been translated by AI.