A new campaign called Stop Sloppypasta has launched at stopsloppypasta.ai to address a growing problem in professional communication: the sharing of raw, unverified AI-generated content that shifts the full burden of verification onto recipients. The movement defines "sloppypasta" as sharing AI output in a way that delegates verification responsibility to people who never asked for it, creating a fundamental imbalance in professional and online communication.
Movement Identifies Core Problem With Unverified AI Content Sharing
The campaign argues that LLMs have made producing text effectively free, while reading and comprehending it still demands the same cognitive investment it always has. This asymmetry becomes a problem when people share AI-generated summaries, reports, or analyses without verifying them first. The shared output "obfuscates the chain of trust": recipients cannot tell what has been checked, what can be trusted, or which parts might contain hallucinations or errors. That uncertainty leaves recipients to either verify everything themselves or accept information of unknown reliability.
Stop Sloppypasta Provides Four Key Guidelines
The movement outlines practical guidelines for sharing AI-generated content:
- If you've verified and edited AI output, send it as your own work with a note about AI assistance
- If sharing raw output, explicitly state that it's unverified AI content
- Only share AI output when specifically requested
- When sharing AI output is necessary, provide it as a link or attachment rather than inline text
These guidelines focus on communication efficiency and trust preservation rather than moral judgment about AI use. The goal is to maintain clear accountability and prevent the erosion of professional communication standards.
Hacker News Community Shows Strong Resonance With Problem
The Stop Sloppypasta submission reached 444 points with 181 comments on Hacker News as of March 15, 2026, indicating significant community engagement. Discussion threads revealed that many professionals have experienced frustration with unsolicited AI-generated content in emails, Slack messages, and documentation. The timing is significant: AI writing assistants became ubiquitous in 2026, driving an explosion in the volume of AI-generated text in professional settings.
Campaign Connects to Broader 2026 AI Content Quality Debates
The movement connects to wider 2026 discussions about "AI slop" — low-quality AI-generated content flooding various platforms. Microsoft CEO Satya Nadella notably asked people to stop using the term "slop" for AI content in early 2026, highlighting how contentious AI content quality has become. Stop Sloppypasta addresses this issue from a practical communication perspective, focusing on establishing etiquette standards as AI tools become standard in workplace environments.
Key Takeaways
- The Stop Sloppypasta movement launched at stopsloppypasta.ai to address unverified AI-generated content sharing in professional communication
- The campaign defines sloppypasta as sharing AI output that delegates verification responsibility to recipients who didn't request it
- The movement provides four practical guidelines including explicitly labeling unverified content and sharing AI output as attachments when necessary
- The Hacker News submission reached 444 points with 181 comments as of March 15, 2026, showing strong community resonance
- The timing is significant as AI writing assistants became ubiquitous in 2026, creating an explosion in AI-generated professional communication