Mirage, a unified virtual filesystem for AI agents created by strukto-ai, launched on May 6, 2026, and gained 995 GitHub stars within 36 hours. The open-source infrastructure project solves a fundamental integration challenge: instead of requiring agents to learn dozens of different APIs and Model Context Protocol (MCP) servers, Mirage presents every backend through a single unified filesystem that any LLM familiar with bash can use immediately.
Bash-Native Interface Eliminates API Learning Curve
Mirage's core innovation is mounting disparate services side-by-side under a single filesystem root. Agents can execute familiar bash commands across different backends, running operations like `grep alert /slack/general/*.json | wc -l` to search Slack channels or `cp /s3/report.csv /data/local.csv` to move files between S3 and local storage. This approach requires zero new vocabulary for LLMs already trained on bash commands.
The system currently supports a broad set of mountable resources: in-memory RAM and local disk, Redis and MongoDB, object storage (S3, R2, OCI, Supabase, GCS), Google Workspace (Gmail, GDrive, GDocs, GSheets, GSlides), project tools (GitHub, Linear, Notion, Trello), messaging (Slack, Discord, Telegram, Email), and SSH.
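The mechanism behind this single-root design can be illustrated with a toy sketch: the first path segment selects a mounted backend, and the rest of the path is handed to it. The `Backend` interface, class names, and in-memory stand-ins below are hypothetical, not Mirage's actual API.

```typescript
// Toy model of a unified virtual filesystem: a mount table maps path
// prefixes to backends, and reads are dispatched by prefix match.
interface Backend {
  read(path: string): string;
}

class VirtualFs {
  private mounts = new Map<string, Backend>();

  mount(prefix: string, backend: Backend): void {
    this.mounts.set(prefix, backend);
  }

  // Resolve e.g. "/s3/report.csv" to the backend mounted at "/s3",
  // passing along the remainder of the path ("/report.csv").
  read(path: string): string {
    for (const [prefix, backend] of this.mounts) {
      if (path.startsWith(prefix + "/")) {
        return backend.read(path.slice(prefix.length));
      }
    }
    throw new Error(`no mount for ${path}`);
  }
}

// In-memory stand-ins for two very different services.
const s3: Backend = { read: (p) => `s3 object at ${p}` };
const slack: Backend = { read: (p) => `slack messages at ${p}` };

const vfs = new VirtualFs();
vfs.mount("/s3", s3);
vfs.mount("/slack", slack);

console.log(vfs.read("/s3/report.csv"));      // routed to the S3 stand-in
console.log(vfs.read("/slack/general.json")); // routed to the Slack stand-in
```

Because every backend sits behind the same path-shaped interface, generic tools like `grep`, `cp`, and `ls` need no per-service integration; the routing layer absorbs the differences.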
Developer Integration and Framework Support
Mirage provides Python (≥3.12) and TypeScript (Node.js ≥20) SDKs that embed directly into applications without requiring a separate process. Developers can integrate the filesystem into FastAPI servers, Express applications, or browser-based environments. The project supports major AI frameworks including OpenAI Agents SDK, Vercel AI SDK, LangChain, Pydantic AI, CAMEL, and OpenHands. A lightweight CLI also integrates with coding agents like Claude Code.
TypeScript examples in the README demonstrate creating workspaces with multiple mounts (/data as RAM, /s3 as S3 bucket, /slack as Slack workspace) and executing cross-service bash operations within a few lines of code.
Performance Features and Portable Workspaces
The system implements a two-layer caching architecture to minimize network calls against remote backends. Initial directory operations hit the API, but subsequent access serves from a local index until TTL expires. File reads similarly stream from origin on first access, then serve from cache for pipeline operations.
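The caching behavior described above can be modeled with a short sketch: a value is fetched from origin once, then served from a local entry until its TTL expires. This is a toy model of the general pattern, not Mirage's implementation; the class and field names are invented for illustration.

```typescript
// Toy TTL cache: the first access to a path hits the "remote backend",
// later accesses are served locally until the entry expires.
type Fetcher = (path: string) => string;

class TtlCache {
  private entries = new Map<string, { value: string; expiresAt: number }>();
  public fetchCount = 0; // counts trips to the simulated origin

  constructor(private fetcher: Fetcher, private ttlMs: number) {}

  get(path: string, now: number): string {
    const hit = this.entries.get(path);
    if (hit && hit.expiresAt > now) return hit.value; // served from cache
    // Cache miss or expired entry: go to origin, then repopulate.
    this.fetchCount += 1;
    const value = this.fetcher(path);
    this.entries.set(path, { value, expiresAt: now + this.ttlMs });
    return value;
  }
}

const cache = new TtlCache((p) => `listing of ${p}`, 60_000);
cache.get("/slack/general", 0);      // first access hits the API
cache.get("/slack/general", 1_000);  // served from the local index
cache.get("/slack/general", 61_000); // TTL expired: hits the API again
console.log(cache.fetchCount);       // → 2
```

A pipeline like `grep … | wc -l` over a cached directory therefore pays the network cost once per TTL window rather than once per command.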
Workspaces are fully portable—developers can clone, snapshot, and version entire environments, moving agent runs between machines without restarting or losing state. This addresses a common pain point in distributed agent development.
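One way portability like this can work is by making the workspace's state, its mount table plus any local file layer, fully serializable. The sketch below is an assumption-laden toy, not Mirage's actual snapshot format; all names are hypothetical.

```typescript
// Toy portable workspace: state round-trips through JSON, so a run can be
// snapshotted on one machine and restored on another without losing state.
interface WorkspaceState {
  mounts: Record<string, string>; // mount point -> backend URI
  files: Record<string, string>;  // path -> contents (local/RAM layer)
}

class Workspace {
  constructor(public state: WorkspaceState) {}

  write(path: string, contents: string): void {
    this.state.files[path] = contents;
  }

  // Capture the full environment as a JSON snapshot.
  snapshot(): string {
    return JSON.stringify(this.state);
  }

  // Rebuild an identical workspace from a snapshot, e.g. on another machine.
  static restore(snapshot: string): Workspace {
    return new Workspace(JSON.parse(snapshot));
  }
}

const ws = new Workspace({ mounts: { "/s3": "s3://bucket" }, files: {} });
ws.write("/data/progress.txt", "step 3 of 5");

const moved = Workspace.restore(ws.snapshot()); // "clone" onto a new machine
console.log(moved.state.files["/data/progress.txt"]); // → step 3 of 5
```

Since remote backends hold their own data, only the mount table and local layer need to travel, which is what keeps clones and snapshots cheap in a design like this.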
Rapid Adoption Signals Developer Interest
The project's accumulation of 995 stars and 57 forks in just 36 hours indicates strong developer demand for filesystem-based abstractions in agent infrastructure. Released under Apache 2.0 license, Mirage positions itself as leveraging "the filesystem and bash vocabulary LLMs are most fluent in," directly competing with alternative approaches like Turso's AgentFS.
Key Takeaways
- Mirage gained 995 GitHub stars in 36 hours after launching May 6, 2026, with 57 forks indicating rapid developer adoption
- The system unifies 20+ backends (S3, Slack, Gmail, GitHub, MongoDB, etc.) under a single filesystem interface that works with standard bash commands
- Python and TypeScript SDKs embed directly into applications and support OpenAI Agents SDK, Vercel AI SDK, LangChain, Pydantic AI, CAMEL, and OpenHands
- Two-layer caching reduces network calls by serving repeated operations from local state, while portable workspaces enable moving agent runs between machines
- The bash-native approach eliminates the need for LLMs to learn new APIs, leveraging existing bash vocabulary already in model training data