Developer guyb3 has launched OneCLI, a Rust-based credential proxy that prevents AI agents from accessing raw API keys while still allowing them to make authenticated requests. Posted to Hacker News on March 12, 2026, the project received 121 points and 38 comments, addressing a growing security concern as developers grant agents access to external services.
OneCLI Acts as Credential Injection Proxy for Agent Requests
OneCLI solves the API key exposure problem by storing real credentials in an encrypted vault while giving agents placeholder keys. According to the developers, "We built OneCLI because AI agents are being given raw API keys. And it's going about as well as you'd expect. We figured the answer isn't 'don't give agents access,' it's 'give them access without giving them secrets.'"
When an agent makes an HTTP request through OneCLI's proxy, the system matches the request by host and path, verifies the agent has appropriate permissions, swaps the placeholder for the real credential, and forwards the request. The agent never touches the actual secret, reducing the risk of credential exfiltration through prompt injection or other attacks.
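The match-and-swap flow described above can be sketched in a few lines. This is illustrative Python, not OneCLI's actual Rust code, and the rule table, placeholder token, and function name are hypothetical:

```python
from fnmatch import fnmatch
from urllib.parse import urlsplit

# Hypothetical rule set: placeholder token -> (host pattern, path pattern, real secret)
RULES = {
    "PLACEHOLDER_STRIPE": ("api.stripe.com", "/v1/*", "sk_live_real_secret"),
}

def inject_credential(url: str, auth_header: str) -> str:
    """Swap a placeholder bearer token for the real credential when the
    request matches the rule's host and path; otherwise pass it through."""
    parts = urlsplit(url)
    token = auth_header.removeprefix("Bearer ")
    rule = RULES.get(token)
    if rule and fnmatch(parts.hostname, rule[0]) and fnmatch(parts.path, rule[1]):
        return f"Bearer {rule[2]}"  # the agent never sees this value
    return auth_header

print(inject_credential("https://api.stripe.com/v1/charges", "Bearer PLACEHOLDER_STRIPE"))
```

A request to a non-matching host keeps its placeholder, so an exfiltrated token is worthless outside the proxy.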
Technical Architecture Uses Rust for Memory-Safe Credential Handling
OneCLI's architecture consists of three main components:
- Rust proxy: a fast, memory-safe HTTP gateway with MITM interception for HTTPS (port 10255)
- Next.js dashboard and API: the management interface, served on port 10254
- Embedded Postgres (PGlite): no external database dependency required
Credentials are encrypted at rest using AES-256-GCM and decrypted only at request time. The entire system runs in a single Docker container with a one-line installation:

docker run --pull always -p 10254:10254 -p 10255:10255 -v onecli-data:/app/data ghcr.io/onecli/onecli
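The encrypt-at-rest, decrypt-at-request-time pattern looks roughly like the sketch below. It is a stand-in, not OneCLI's implementation: it uses Python's third-party `cryptography` package rather than Rust, and the secret value and associated-data label are made up:

```python
# Illustrative only: shows the AES-256-GCM lifecycle the article describes.
# Requires the third-party `cryptography` package (pip install cryptography).
import secrets
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # held by the vault, never by agents
aead = AESGCM(key)

# Encrypt at rest: store (nonce, ciphertext); the plaintext secret is discarded.
nonce = secrets.token_bytes(12)
ciphertext = aead.encrypt(nonce, b"sk_live_real_secret", b"credential-id-1")

# Decrypt only at request time, immediately before forwarding upstream.
plaintext = aead.decrypt(nonce, ciphertext, b"credential-id-1")
assert plaintext == b"sk_live_real_secret"
```

Because GCM is authenticated encryption, tampering with the stored ciphertext or its associated data makes decryption fail rather than yield a corrupted secret.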
Key features include host and path matching with pattern support, multi-agent support with scoped permissions per agent, and authentication via Proxy-Authorization headers. The system works with any agent framework that can set the HTTPS_PROXY environment variable.
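Pointing an agent at the proxy can be sketched as follows. The port comes from the article; the agent token is invented, and the Basic scheme for Proxy-Authorization is an assumption, since the article does not name one:

```python
import base64
import os

# Any HTTPS client that honors HTTPS_PROXY will route through the gateway
# (port 10255 per the article).
os.environ["HTTPS_PROXY"] = "http://localhost:10255"

def proxy_auth_header(agent_token: str) -> str:
    """Build a Proxy-Authorization value (Basic scheme assumed; token is hypothetical)."""
    encoded = base64.b64encode(f"agent:{agent_token}".encode()).decode()
    return f"Basic {encoded}"

print(proxy_auth_header("example-token"))
```

With the environment variable set and the header attached, the agent's HTTP stack needs no other changes, which is what makes the approach framework-agnostic.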
Community Questions Sandbox Isolation and Prompt Injection Risks
The top-voted comment on Hacker News raised a critical concern: "Giving a box API keys was always considered a risk. This system needs to run outside of the execution sandbox so that the agent can't just read the keys from the proxy process." Other commenters noted that while OneCLI prevents credential exfiltration, it doesn't prevent misuse of authorized services if an agent is compromised via prompt injection.
Several users pointed out that similar auth-proxy solutions have existed for years in enterprise environments. However, they acknowledged OneCLI's value for audit trails and simplified key rotation. The developers outlined their roadmap, stating: "The next layer is access policies and audit, defining what each agent can call, logging everything, and requiring human approval before sensitive actions go through."
Key Takeaways
- OneCLI is a Rust-based credential proxy that prevents AI agents from accessing raw API keys by injecting real credentials at request time
- The system uses AES-256-GCM encryption, embedded Postgres, and runs in a single Docker container with one-line installation
- Architecture includes a Rust gateway for credential injection (port 10255) and Next.js dashboard (port 10254) with multi-agent support
- Community concerns focus on sandbox isolation and the fact that OneCLI prevents credential exfiltration but not misuse of authorized services
- The project is Apache-2.0 licensed with a roadmap including access policies, comprehensive audit logging, and human approval workflows