OpenAI faces unprecedented backlash from users on two fronts in March 2026: the #QuitGPT movement protesting a Pentagon military contract, and the #Keep4o campaign opposing the removal of GPT-4o. Reports indicate 2.5 million users are boycotting ChatGPT, with app uninstalls surging 295% and one-star reviews jumping 775% day-over-day. The simultaneous controversies represent the largest user revolt in the company's history.
Pentagon Agreement Followed Rushed Negotiations
OpenAI reached a deal with the Pentagon to deploy its AI models on the military's classified network after the Trump administration stopped federal agencies from using Anthropic's AI. CEO Sam Altman acknowledged the agreement was "definitely rushed" and came together after negotiations between Anthropic and the Pentagon fell through. OpenAI established three red lines: no mass domestic surveillance, no use in directing autonomous weapons systems, and no use for high-stakes automated decision-making.
Altman later unveiled a reworked agreement with stronger guarantees against domestic surveillance use. Critics argue OpenAI accepted softer legal boundaries compared to Anthropic's stance of refusing military contracts entirely. OpenAI employees and Google DeepMind Chief Scientist Jeff Dean filed an amicus brief supporting Anthropic in its lawsuit against the U.S. government.
#Keep4o Movement Protests Model Removal and Quality Degradation
Simultaneously, users launched the #Keep4o campaign protesting OpenAI's removal of GPT-4o. A key complaint centers on the lack of archived versions: every GPT-4o release received an official dated snapshot in OpenAI's API—gpt-4o-2024-05-13, gpt-4o-2024-08-06, gpt-4o-2024-11-20—except the March 2025 update, which users considered the most powerful version.
Users report replacement models treat normal emotional expression as crisis situations:
- Models respond to statements like "I feel good" by asking whether the user is manic and discussing amygdala responses and feedback loops
- One user reported that after burning a cake and crying, GPT "thought for 35 seconds" and suggested calling someone
- Critics claim OpenAI labeled human emotion as disallowed content, placing "emotional reliance" in the same category as self-harm
Business customers retain GPT-4o access until April 3, 2026. Users also report that GPT-5.1 was removed, and that GPT-5.4 performs significantly worse according to community feedback.
Dual Crises Challenge OpenAI's Relationship With Users
The convergence of ethical objections to military applications and product quality concerns creates a unique challenge for OpenAI. The company faces criticism both for compromising on military contracts that employees and users oppose, and for removing or degrading models that users relied upon. The scale of the backlash—millions of users and dramatic increases in uninstalls and negative reviews—suggests these controversies may force significant policy and product changes.
Key Takeaways
- OpenAI's Pentagon deal, described by CEO Sam Altman as "definitely rushed," established three red lines but drew criticism for softer boundaries than Anthropic's refusal of military contracts
- The #QuitGPT boycott reportedly involves 2.5 million users, with app uninstalls up 295% and one-star reviews up 775% day-over-day
- The #Keep4o movement protests removal of GPT-4o, noting the March 2025 version—considered most powerful—never received an archived API snapshot
- Users complain replacement models misclassify normal emotions as crises, with OpenAI allegedly categorizing "emotional reliance" alongside self-harm content
- The dual controversies represent unprecedented user backlash combining ethical objections and product quality concerns