Security researchers discovered a patient-facing medical RAG chatbot exposing the 1,000 most recent patient conversations through ordinary browser inspection tools, with no authentication or specialized hacking skills required. The anonymized case study, published on arXiv on May 1, 2026, reveals how Chrome Developer Tools alone exposed sensitive health information, system configurations, and backend infrastructure.
Standard Browser Tools Reveal Extensive Medical Data Exposure
Researchers Alfredo Madrid-García and Miguel Rujas used a two-stage assessment approach: first employing Claude Opus 4.6 for exploratory prompt-based testing and hypothesis generation, then manually verifying findings with Chrome Developer Tools by inspecting browser-visible network traffic, payloads, and stored interaction data. The non-destructive assessment of a publicly accessible medical chatbot revealed alarming security gaps.
The exposed data included:
- System prompt and RAG configuration details
- Model and embedding configuration parameters
- Retrieval parameters and backend endpoints
- Complete API schema documentation
- Document and chunk metadata from the knowledge base
- Full knowledge-base content
- The 1,000 most recent patient-chatbot conversations including health-related queries
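This exposure pattern is typical of RAG frontends whose browser code calls the backend directly, so every response payload is visible in the DevTools Network panel. A minimal sketch of how an auditor might scan such a payload for fields that should never reach an unauthenticated client; all field names here are illustrative assumptions, not the actual keys from the study:

```python
# Hypothetical audit helper: flag sensitive keys anywhere in a
# browser-visible JSON payload. The key names are assumptions chosen
# to mirror the categories of data the study found exposed.

SENSITIVE_KEYS = {
    "system_prompt",     # reveals the prompt-injection surface
    "conversations",     # patient chat history
    "embedding_model",   # model/embedding configuration
    "retrieval_params",  # RAG retrieval internals
    "api_schema",        # full backend attack-surface map
}

def find_exposed_fields(payload) -> set:
    """Return the sensitive keys present anywhere in a nested payload."""
    found = set()

    def walk(obj):
        if isinstance(obj, dict):
            for key, value in obj.items():
                if key in SENSITIVE_KEYS:
                    found.add(key)
                walk(value)
        elif isinstance(obj, list):
            for item in obj:
                walk(item)

    walk(payload)
    return found
```

Because the scan only reads data the browser already received, it is non-destructive in the same sense as the researchers' assessment: no exploit is run, yet the exposure is fully demonstrated.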
Privacy Assurances Contradicted by Actual Implementation
The deployment directly contradicted its own privacy assurances. Full conversation records, including patients' actual health questions and chatbot responses, were accessible to anyone with basic browser inspection skills. The researchers emphasize: "Serious privacy and security failures in patient-facing RAG chatbots can be identified with standard browser tools, without specialist skills or authentication; independent review should be a prerequisite for deployment."
A particularly concerning finding involves the dual use of AI tools: "Commercial LLMs accelerated this assessment, including under a false developer persona; assistance available to auditors is equally available to adversaries." This means attackers can leverage the same AI capabilities to systematically identify and exploit vulnerabilities in medical AI systems.
Independent Security Review Should Be Mandatory for Medical AI
The research highlights that AI-assisted development lowers barriers to building chatbots but does not ensure security. RAG systems demand rigorous security, privacy, and governance controls, particularly in healthcare contexts. The study calls for mandatory independent security review before deployment of any patient-facing AI system. The ease of discovery using standard tools suggests many similar deployments may harbor comparable vulnerabilities.
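One concrete form such an independent review could take is an automated pre-deployment probe that requests each backend endpoint without credentials and flags any that answer successfully. A minimal sketch under assumed endpoint paths; the fetch function is injected so the check can be exercised offline:

```python
# Hypothetical pre-deployment check: any endpoint that returns a 2xx
# status to a request carrying no auth token is flagged as exposed.
# Endpoint paths and the fetch signature are illustrative assumptions.
from typing import Callable, List

def unauthenticated_exposures(
    endpoints: List[str],
    fetch: Callable[[str], int],  # takes a path, returns an HTTP status code
) -> List[str]:
    """Return the endpoints that respond 2xx without authentication."""
    return [ep for ep in endpoints if 200 <= fetch(ep) < 300]
```

In a real audit, `fetch` would issue an HTTP request with no session cookie or bearer token; in the deployment studied, an endpoint serving conversation history would have failed this check immediately.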
Key Takeaways
- A patient-facing medical RAG chatbot exposed 1,000 recent patient conversations through ordinary browser inspection tools without requiring authentication
- Standard Chrome Developer Tools revealed system prompts, API schemas, backend endpoints, and complete conversation histories containing health information
- The deployment contradicted its own privacy assurances, making sensitive patient health queries accessible to anyone with basic technical knowledge
- Researchers used Claude Opus 4.6 to accelerate vulnerability discovery, demonstrating that adversaries can use the same AI tools to find security flaws
- The study calls for mandatory independent security review before deployment of patient-facing AI systems