Microsoft has urged a federal court to issue a temporary restraining order halting the effects of the Pentagon’s supply-chain risk designation against Anthropic, warning that the designation could cause irreparable damage to the technology networks supporting national defense. The amicus brief was filed in a San Francisco federal court and was accompanied by a separate joint filing from Amazon, Google, Apple, and OpenAI. The combined weight of these filings reflects an unprecedented degree of unity across the technology industry in opposing the Pentagon’s action.
Anthropic’s legal battle was sparked by the Pentagon’s decision to label the company a supply-chain risk after it refused, during negotiations over a $200 million contract, to allow its Claude AI to be used for mass surveillance of US citizens or to power autonomous lethal weapons. Defense Secretary Pete Hegseth formalized the designation after talks broke down, and cancellations of Anthropic’s existing government contracts followed. The company filed two lawsuits on the same day, one in California and one in Washington, DC.
Microsoft’s filing is grounded in its direct integration of Anthropic’s AI into military systems it provides to the federal government and in its status as a partner in the Pentagon’s $9 billion cloud-computing contract, alongside numerous other agreements with government agencies. Microsoft publicly argued that the country needs a path forward offering both reliable access to the best AI technology and effective safeguards against its misuse for surveillance or unauthorized warfare.
Anthropic’s court filings argued that the supply-chain risk designation was an unconstitutional act of retaliation for the company’s public stance on AI safety. The company stated that it does not currently believe Claude is safe or reliable enough for lethal autonomous operations, which it said was the genuine basis for its contract demands. Anthropic also noted that no US company had ever previously received this designation, underscoring its unprecedented nature.
Congressional Democrats are simultaneously pressing the Pentagon for information about whether AI was used in a strike in Iran that reportedly killed more than 175 civilians at a school. Their formal letters demand details on the AI targeting tools involved and the human oversight processes governing them. Together, the mounting legal and legislative pressure on the Pentagon marks a watershed moment for public accountability in the use of artificial intelligence in American military operations.