More than 30 employees of OpenAI and Google DeepMind filed a statement on March 9–10, 2026, supporting Anthropic's position after the U.S. Department of Defense labeled the AI firm a supply-chain risk. The Pentagon designation came after Anthropic refused to allow the DoD to use its technology for mass surveillance of Americans or for autonomous weapons that fire without human oversight. Signatories of the amicus brief include Google chief scientist Jeff Dean along with employees of both OpenAI and Google DeepMind, Anthropic's direct competitors.
Cross-Company Support Signals Concern About Government Overreach
The unprecedented cross-company support carries a warning: a Pentagon blacklist of Anthropic threatens to damage the entire American AI industry, not just one company. If the government can punish companies for drawing ethical red lines, the signatories argue, it creates a chilling effect on responsible AI development. Anthropic CEO Dario Amodei has been vocal about those boundaries, refusing to allow mass surveillance of American citizens, autonomous weapons systems that fire without human oversight, and other military applications contrary to the company's AI safety mission.
The optics of defense deals have also sparked public conflict between rival CEOs. Dario Amodei called OpenAI's approach to Pentagon work "safety theater" and described Sam Altman's public statements as "straight up lies," an unusually harsh exchange between major AI leaders.
Google Expands Pentagon Work While Competitors Face Controversy
While OpenAI and Anthropic publicly spar over Pentagon conditions, Google is quietly expanding its defense work and adding users faster than its rivals. Google is set to provide AI agents to the Pentagon's 3-million-person workforce for unclassified tasks, a more measured approach that avoids the controversies facing its competitors. The strategy lets Google maintain defense relationships while steering clear of the autonomous-weapons and surveillance debates.
Anthropic Gains Market Share Despite Government Pressure
Despite the Pentagon's supply-chain risk designation, Anthropic is winning market share. According to the Ramp March 2026 AI Index, among companies purchasing AI services for the first time, Anthropic now wins approximately 70% of head-to-head matchups against OpenAI. The company demonstrated continued business momentum by announcing the Anthropic Institute on March 11, 2026, and investing $100 million in the Claude Partner Network on March 12, 2026.
Fundamental Tension Between National Security and AI Safety
The case crystallizes a fundamental tension in AI development, with national security interests demanding AI capabilities for defense, AI safety advocates warning about autonomous weapons and surveillance, and commercial interests navigating the competing pressures. The Pentagon's willingness to label a major AI company a "supply-chain risk" over ethical objections sets a concerning precedent for the relationship between government and the AI industry.
Key Takeaways
- More than 30 OpenAI and Google DeepMind employees, including Google chief scientist Jeff Dean, filed a statement supporting Anthropic against the Pentagon's supply-chain risk designation
- Pentagon labeled Anthropic a risk after the company refused to allow mass surveillance of Americans or autonomous weapons firing without human oversight
- Anthropic wins approximately 70% of head-to-head matchups against OpenAI among first-time AI service purchasers according to Ramp March 2026 AI Index
- Google is expanding Pentagon work by providing AI agents to the department's 3-million-person workforce for unclassified tasks, avoiding the autonomous-weapons controversy
- Anthropic invested $100 million in Claude Partner Network and announced Anthropic Institute in March 2026 despite government pressure