Anthropic has filed lawsuits in federal court challenging the Pentagon's designation of the company as a supply chain risk, alleging First Amendment violations and statutory overreach. The legal action follows the collapse of negotiations over a $200 million Defense Department contract, after Anthropic refused to allow its AI systems to be used for mass surveillance of U.S. citizens or in autonomous weapons.
Contract Breakdown Over Red Lines
Anthropic signed a $200 million contract with the Department of Defense in July 2025, becoming the first AI lab to deploy its technology across the department's classified networks. Negotiations to update the contract broke down over two conditions Anthropic demanded: a prohibition on mass surveillance of U.S. citizens and a prohibition on autonomous weapons use.
The Pentagon rejected these limitations, stating it could not allow a private company to dictate how military tools may be used during national security emergencies. The Department of Defense insists on using AI tools for all lawful purposes without contractual restrictions.
Dual Lawsuits Challenge Federal Action
Anthropic filed lawsuits in the U.S. District Court for the Northern District of California and the federal appeals court in Washington, D.C. The complaints allege the Trump administration violated the company's First Amendment rights and exceeded the scope of federal supply chain risk law.
The filing states: "The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech. No federal statute authorizes the actions taken here." Anthropic's complaint warns the actions could jeopardize hundreds of millions of dollars in revenue.
Industry Support and Competitive Fallout
Dozens of scientists and researchers at OpenAI and Google DeepMind filed an amicus brief supporting Anthropic in their personal capacities. Google chief scientist Jeff Dean was among the signatories, demonstrating unusual cross-company solidarity on AI safety principles.
Hours after Anthropic-Pentagon negotiations collapsed, OpenAI struck its own deal with the Department of Defense, apparently agreeing to provide models without the contractual limitations Anthropic demanded. This sparked the #QuitGPT movement, which attracted over 2.5 million supporters, and ChatGPT uninstalls surged 295% overnight.
CEO Calls OpenAI Approach 'Safety Theater'
Anthropic CEO Dario Amodei publicly criticized OpenAI's Pentagon deal, calling the company's approach "safety theater" and describing OpenAI CEO Sam Altman's public statements as "straight up lies." The dispute highlights deep divisions within the AI industry over acceptable military applications.
Patrick Moorhead, CEO of Moor Insights & Strategy, told Axios: "OpenAI looked opportunistic. Anthropic got blacklisted. Google gained the most ground and nobody's talking about it." Google is now set to provide AI agents to the Pentagon's 3-million-person workforce for unclassified work, positioning itself as a beneficiary of the Anthropic-OpenAI controversy.
Key Takeaways
- Anthropic filed federal lawsuits challenging the Pentagon's supply chain risk designation after refusing to allow mass surveillance and autonomous weapons use
- The company had a $200 million Defense Department contract and was the first AI lab deployed on classified networks before the designation
- OpenAI signed a Pentagon deal hours after Anthropic negotiations collapsed, sparking a #QuitGPT movement with 2.5 million supporters and 295% surge in uninstalls
- Scientists from OpenAI and Google DeepMind, including Google chief scientist Jeff Dean, filed an amicus brief supporting Anthropic
- Google emerged as a quiet winner, securing contracts to provide AI agents to the Pentagon's 3-million-person workforce for unclassified work