Microsoft’s Amicus Filing for Anthropic Is a Wake-Up Call for the Pentagon on AI Governance

Microsoft’s decision to file an amicus brief in support of Anthropic in its battle against the Pentagon is being interpreted as a wake-up call for the Defense Department about the consequences of attempting to govern AI through coercion rather than collaboration. The brief, filed in a San Francisco federal court, called for a temporary restraining order against the Pentagon’s supply-chain risk designation. Amazon, Google, Apple, and OpenAI have also filed in support of Anthropic, making the industry’s collective message to the Pentagon unmistakable.
The Pentagon designated Anthropic a supply-chain risk after the company, during negotiations over a $200 million contract, refused to allow its Claude AI to be used for mass surveillance of US citizens or to power autonomous lethal weapons. Defense Secretary Pete Hegseth formalized the designation, and the government began cancelling Anthropic’s contracts. The company responded by filing simultaneous lawsuits challenging the designation in federal courts in California and Washington, DC.
Microsoft’s wake-up call to the Pentagon is informed by its own deep integration of Anthropic’s technology into federal military systems and its participation in the $9 billion Joint Warfighting Cloud Capability contract. The company also holds additional agreements with government agencies spanning defense, intelligence, and civilian services. Microsoft publicly called for a path forward in which the government and technology sector collaborate to ensure AI advances national security without crossing ethical lines.
Anthropic’s lawsuits argued that the supply-chain risk designation was an unconstitutional act of ideological retaliation for the company’s publicly held AI safety positions. The company disclosed that it does not currently believe Claude is safe or reliable enough for lethal autonomous operations, which it said was the genuine basis for its contract demands. Anthropic also noted that the designation had never before been applied to a US company, underscoring how unprecedented the Pentagon’s action was.
Congressional Democrats have separately written to the Pentagon asking whether AI was used in a strike in Iran that reportedly killed over 175 civilians at an elementary school; their formal inquiries probe the role of AI targeting systems and the degree of human oversight applied. Together, Microsoft’s wake-up call, the industry coalition, and congressional pressure signal to the Pentagon that its approach to AI governance must change if it wants to maintain access to the best commercial AI technology.