A new analytics dashboard called Rudel has provided the first comprehensive look at real-world AI coding agent usage patterns, analyzing 1,573 actual Claude Code sessions. The open-source tool, built by the obsessiondb organization and launched on Hacker News on March 12, 2026, reveals surprising inefficiencies in how developers use AI coding assistants.
Skills Feature Remains Largely Unused in Real Sessions
The most striking finding from Rudel's dataset shows that Claude Code's skills feature was used in only 4% of sessions. Skills allow developers to give Claude specialized capabilities and context, but the data suggests most users either don't know about the feature or haven't integrated it into their workflow. This represents a significant gap between available functionality and actual usage patterns.
More Than a Quarter of Sessions Are Abandoned, Most Within the First Minute
Rudel's analysis found that 26% of all Claude Code sessions are abandoned, with most abandonments occurring within the first 60 seconds. The developers identified "error cascade patterns" that appear in the first 2 minutes of a session and can predict abandonment with reasonable accuracy. Session success rates vary significantly by task type, with documentation tasks scoring highest and refactoring tasks showing the lowest success rates.
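The announcement doesn't describe how Rudel detects these "error cascade patterns," but the term suggests runs of consecutive failures early in a session. As a rough illustration only (the event shape, window, and threshold below are all assumptions, not Rudel's actual logic), such a heuristic might look like:

```typescript
// A session event, reduced to the two fields the heuristic needs.
interface SessionEvent {
  tSeconds: number; // seconds since session start
  isError: boolean; // e.g. a failed tool call or rejected edit
}

// Flags a session as at-risk of abandonment if it contains a run of
// consecutive errors within the first two minutes. The window and
// threshold are illustrative defaults, not published parameters.
function hasEarlyErrorCascade(
  events: SessionEvent[],
  windowSeconds = 120,
  threshold = 3,
): boolean {
  let run = 0;
  for (const e of events) {
    if (e.tSeconds > windowSeconds) break; // only inspect the early window
    run = e.isError ? run + 1 : 0; // count consecutive errors
    if (run >= threshold) return true;
  }
  return false;
}
```

A real implementation would presumably weight error types and recovery behavior rather than a flat run count, but this captures the core idea: early, uninterrupted failure streaks correlate with abandonment.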
How Rudel Captures and Processes Session Data
Users install Rudel via npm and run a single command that registers a Claude Code hook. When sessions end, the hook automatically uploads transcripts to Rudel's backend, where they're stored in ClickHouse and processed into analytics. Each upload includes:
- Session IDs and timestamps
- User and organization identifiers
- Project paths and git context (repository, branch, SHA, remote)
- Complete session transcripts
- Sub-agent usage metrics
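The fields above can be pictured as a single upload payload. The following sketch is a guess at what that structure might look like; every field name and the `buildUpload` helper are hypothetical, derived only from the categories listed, not from Rudel's actual schema:

```typescript
// Hypothetical shape of a session upload, inferred from the data
// categories Rudel says it collects. Not the tool's real schema.
interface GitContext {
  repository: string;
  branch: string;
  sha: string;
  remote: string;
}

interface SessionUpload {
  sessionId: string;
  timestamp: string; // ISO 8601
  userId: string;
  orgId: string;
  projectPath: string;
  git: GitContext;
  transcript: string; // complete session transcript
  subAgentInvocations: number; // sub-agent usage metric
}

// Assembles an upload payload from data a session-end hook would have
// on hand. Identifiers here are placeholders.
function buildUpload(
  sessionId: string,
  userId: string,
  orgId: string,
  projectPath: string,
  git: GitContext,
  transcript: string,
  subAgentInvocations: number,
): SessionUpload {
  return {
    sessionId,
    timestamp: new Date().toISOString(),
    userId,
    orgId,
    projectPath,
    git,
    transcript,
    subAgentInvocations,
  };
}
```

Because the transcript field carries everything the model saw and produced, a payload like this is exactly why the project's own warning about sensitive material matters.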
The developers emphasize that uploaded data "may contain sensitive material, including source code, prompts, tool output, file contents, command output, URLs, and secrets." The tool is nonetheless free to use and fully open source, with its GitHub repository showing 179 commits and 35 stars.
Building the First Benchmark for Agentic Coding Performance
The Rudel team notes that "there is no meaningful benchmark for 'good' agentic session performance" and positions their dataset as the foundation for building one. This represents one of the first public datasets analyzing real-world AI coding agent usage patterns at scale, filling a critical gap in understanding how developers actually interact with AI coding assistants.
Key Takeaways
- Skills feature in Claude Code is used in only 4% of sessions despite being available
- 26% of all Claude Code sessions are abandoned, most within the first 60 seconds
- Error patterns in the first 2 minutes can predict session abandonment with reasonable accuracy
- Session success rates vary significantly by task type, with documentation performing best and refactoring worst
- Rudel represents one of the first public datasets of real AI coding agent usage at scale, with 1,573 sessions analyzed