A US appeals court in Washington, DC, ruled on Wednesday that Anthropic had not met the legal standard to temporarily lift a Pentagon supply-chain-risk designation, extending a legal fight that had already split two federal courts and put the company's access to government systems in doubt.
The three-judge appellate panel said granting a stay would force the US military to keep dealing with an unwanted vendor of critical AI services in the middle of a significant ongoing military conflict, adding that it did not want to risk a substantial judicial imposition on military operations or lightly override the military's national-security judgments. The panel acknowledged that Anthropic may suffer financial harm from the designation, but said that harm did not outweigh the government's interest in keeping the restriction in place.
The Washington ruling conflicted with a lower-court decision in San Francisco last month, where a judge found the Department of Defense had likely acted in bad faith against Anthropic. That judge said the Pentagon was driven by frustration over Anthropic's proposed limits on how its technology could be used and by the company's public criticism of those restrictions, and ordered the supply-chain-risk label removed last week. The Trump administration complied by restoring access to Anthropic AI tools inside the Pentagon and throughout the rest of the federal government.
Anthropic said it remains confident the courts will ultimately agree that the supply-chain designations were unlawful, and spokesperson Danielle Cohen said the courts had recognized the company's need to have these issues resolved quickly. Acting attorney general Todd Blanche cast the DC ruling as a win for the military, saying the government needs full access to Anthropic's models if its technology is integrated into sensitive systems, and adding that military authority and operational control belong to the Commander-in-Chief and the Department of War, not a tech company.
The case has become a test of how much power the executive branch holds over the conduct of tech companies. Anthropic says it is the first US company to be designated under two different supply-chain laws usually aimed at foreign businesses seen as national-security risks, and it has argued in court that it was punished for insisting that its AI tool Claude lacks the accuracy needed for certain sensitive operations, including deadly drone strikes conducted without human supervision. The company also says the designation cost it business. With the Pentagon deploying AI in its war against Iran, the dispute now reaches beyond one company's commercial losses to the broader question of who controls the use of advanced models inside the federal government.