Anthropic filed two federal lawsuits against the Trump administration on March 9, 2026, alleging illegal retaliation after Pentagon officials designated the company a supply chain risk. The legal action follows Anthropic's refusal to remove restrictions preventing Claude AI from being used for mass surveillance and autonomous weapons.
Pentagon Designates Anthropic as Supply Chain Risk
The conflict originated when negotiations to update Anthropic's Pentagon contract broke down over two key restrictions the company wanted to maintain: prohibiting Claude AI from being used for mass surveillance of U.S. citizens and preventing its use in autonomous weapons systems. The Pentagon rejected these restrictions, stating it needs to use AI tools for all lawful purposes and cannot allow private companies to dictate usage during national security emergencies.
In response, Pentagon officials designated Anthropic a supply chain risk, an extraordinary classification historically reserved for foreign adversaries. The designation requires defense contractors to certify that they do not use Claude models in Pentagon work, potentially jeopardizing hundreds of millions of dollars in revenue for Anthropic.
Legal Claims Challenge First Amendment Violations
Anthropic filed lawsuits in the U.S. District Court for the Northern District of California and the federal appeals court in Washington, D.C. The company alleges violations of First Amendment rights, claims the Pentagon exceeded the scope of supply chain risk law, and asserts the designation constitutes illegal retaliation for Anthropic's public stance on AI safety.
Context of Broader AI Safety Tensions
The lawsuits come amid growing tensions over military AI applications. Caitlin Kalinowski, OpenAI's former robotics head, recently resigned, citing concerns about OpenAI's Pentagon contract involving surveillance and autonomous weapons. The Anthropic case represents a direct legal challenge to Pentagon authority over AI deployment restrictions, particularly regarding DoD Directive 3000.09, which governs autonomous weapons systems.
Key Takeaways
- Anthropic filed two federal lawsuits on March 9, 2026, challenging the Pentagon's designation of the company as a supply chain risk
- The designation came after Anthropic refused to remove restrictions preventing Claude AI use for mass surveillance and autonomous weapons
- The supply chain risk label could jeopardize hundreds of millions of dollars in revenue for Anthropic
- Legal claims include First Amendment violations and illegal retaliation for the company's AI safety positions
- The case follows a recent high-profile resignation at OpenAI over similar concerns about military AI applications