Everything you need to know about HealthClaw Guardrails, FHIR, and AI-safe clinical data access.
HealthClaw Guardrails is an open-source security proxy that sits between any AI agent and any FHIR health data server. It enforces safety and compliance rules on every request — automatically, without changing your agent code or your FHIR server.
Think of it as a security guard that every AI health request must pass through. The guard redacts PHI, logs every access, requires extra authorization for writes, and blocks clinical changes until a human confirms them.
It is an open-source project from healthclaw.io.
No. In local/demo mode it stores JSON blobs in SQLite — purely for testing the guardrail patterns. In production use, you point it at a real FHIR server (HAPI, Epic, Medplum, AWS HealthLake) and it acts as a transparent proxy that enforces guardrails on every request going through.
All guardrails — PHI redaction, audit trail, step-up auth, human-in-the-loop — apply regardless of whether you're using local mode or a real upstream server.
Two profiles:
Both flow through the same guardrail stack.
Applied on every read path, including upstream FHIR server responses:
This is HIPAA Safe Harbor de-identification at the read layer. Data is stored unredacted; redaction happens at response time.
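As a sketch of that read-layer pattern (not the project's actual implementation), redaction can be applied to a copy of each resource just before it is returned. The field list here is an illustrative assumption, not the full Safe Harbor identifier set:

```python
import copy

# Illustrative subset of direct-identifier fields; a real Safe Harbor
# guardrail covers the full list of 18 identifier categories.
REDACTED_FIELDS = {"name", "telecom", "address", "birthDate", "identifier"}

def redact(resource: dict) -> dict:
    """Return a copy of a FHIR resource with identifier fields masked.

    The stored resource is never modified; redaction happens only on
    the response path, as described above.
    """
    out = copy.deepcopy(resource)
    for field in REDACTED_FIELDS & out.keys():
        out[field] = "[REDACTED]"
    return out

patient = {"resourceType": "Patient", "id": "p1",
           "name": [{"family": "Smith"}], "gender": "female"}
print(redact(patient)["name"])  # → [REDACTED]
```

Because redaction operates on a deep copy at response time, the stored record stays intact for authorized workflows while every read path returns the masked view.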
Reads are open (with tenant header). Writes require an additional HMAC-SHA256 signed token — the step-up token.
Tokens are generated at /r6/fhir/internal/step-up-token, signed with your STEP_UP_SECRET, include a 128-bit nonce, and expire after 5 minutes. They are tenant-bound — a token for tenant A cannot be used for tenant B.
This prevents an AI agent from autonomously writing clinical data without explicit human intent behind the token issuance.
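The token lifecycle can be sketched with Python's standard hmac module. This is an illustrative approximation under stated assumptions; the actual payload layout and encoding of HealthClaw's tokens are not specified here:

```python
import hashlib
import hmac
import os
import secrets
import time

STEP_UP_SECRET = os.environ.get("STEP_UP_SECRET", "demo-secret")
TTL_SECONDS = 5 * 60  # tokens expire after 5 minutes

def mint_step_up_token(tenant: str) -> str:
    """Issue a tenant-bound token: payload plus HMAC-SHA256 signature."""
    nonce = secrets.token_hex(16)  # 128-bit nonce
    expires = int(time.time()) + TTL_SECONDS
    payload = f"{tenant}.{nonce}.{expires}"
    sig = hmac.new(STEP_UP_SECRET.encode(), payload.encode(),
                   hashlib.sha256).hexdigest()
    return f"{payload}.{sig}"

def verify_step_up_token(token: str, tenant: str) -> bool:
    """Reject tokens that are forged, expired, or bound to another tenant."""
    try:
        tok_tenant, nonce, expires, sig = token.rsplit(".", 3)
    except ValueError:
        return False
    payload = f"{tok_tenant}.{nonce}.{expires}"
    expected = hmac.new(STEP_UP_SECRET.encode(), payload.encode(),
                        hashlib.sha256).hexdigest()
    return (hmac.compare_digest(sig, expected)
            and tok_tenant == tenant
            and int(expires) > time.time())

token = mint_step_up_token("tenant-a")
print(verify_step_up_token(token, "tenant-a"))  # → True
print(verify_step_up_token(token, "tenant-b"))  # → False
```

Binding the tenant into the signed payload is what makes cross-tenant replay fail: changing the tenant invalidates the signature check, and `hmac.compare_digest` keeps the comparison constant-time.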
Clinical resource writes (Observation, Condition, MedicationRequest, DiagnosticReport, AllergyIntolerance, Procedure, CarePlan, Immunization, NutritionIntake, DeviceAlert) return HTTP 428 Precondition Required unless the request includes the header X-Human-Confirmed: true.
This forces whatever system is issuing the write to explicitly set that header — meaning a human or human-authorized workflow confirmed the action. The AI agent cannot bypass this on its own.
Limitation: this is header-based, not cryptographic. A future version will use signed human-confirmation tokens.
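The gate itself is simple to express. This framework-free sketch, with the clinical resource-type list taken from above, shows the decision; the real proxy enforces it inside its request pipeline:

```python
CLINICAL_TYPES = {
    "Observation", "Condition", "MedicationRequest", "DiagnosticReport",
    "AllergyIntolerance", "Procedure", "CarePlan", "Immunization",
    "NutritionIntake", "DeviceAlert",
}

def check_human_confirmation(resource_type: str, headers: dict):
    """Return (status, body) for the human-in-the-loop gate.

    Clinical writes without X-Human-Confirmed: true are refused with
    HTTP 428 Precondition Required; everything else passes through.
    """
    if (resource_type in CLINICAL_TYPES
            and headers.get("X-Human-Confirmed") != "true"):
        return 428, {"error": "human confirmation required"}
    return 200, {"status": "accepted"}

print(check_human_confirmation("Observation", {})[0])  # → 428
print(check_human_confirmation(
    "Observation", {"X-Human-Confirmed": "true"})[0])  # → 200
```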
Yes, at the application layer. SQLAlchemy event listeners fire on before_update and before_delete for AuditEventRecord and raise a RuntimeError, preventing any modification.
Every resource access — read, create, update, validate — writes an AuditEvent recording the tenant, agent, resource, and outcome. These cannot be modified through the application.
Note: a database administrator with direct SQL access could still modify records. For true immutability, store audit events in an append-only ledger or cloud audit service.
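A minimal self-contained sketch of that listener pattern with SQLAlchemy (the model and its columns here are simplified assumptions, not the project's actual schema):

```python
from sqlalchemy import Column, Integer, String, create_engine, event
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class AuditEventRecord(Base):
    __tablename__ = "audit_events"
    id = Column(Integer, primary_key=True)
    outcome = Column(String)

# Mapper-level listeners: any attempt to UPDATE or DELETE an audit
# row through the ORM raises before any SQL is emitted.
@event.listens_for(AuditEventRecord, "before_update")
@event.listens_for(AuditEventRecord, "before_delete")
def _block_mutation(mapper, connection, target):
    raise RuntimeError("AuditEventRecord is append-only")

engine = create_engine("sqlite://")  # in-memory demo database
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(AuditEventRecord(outcome="success"))  # inserts are allowed
    session.commit()
    record = session.get(AuditEventRecord, 1)
    record.outcome = "tampered"
    try:
        session.commit()  # flush triggers before_update and fails
    except RuntimeError as exc:
        print(exc)  # → AuditEventRecord is append-only
```

This enforces append-only behavior for every code path that goes through the ORM, which is exactly the application-layer guarantee described above; direct SQL, as noted, remains outside its reach.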
Model Context Protocol (MCP) is an open standard that lets AI agents connect to tools and data sources in a structured, describable way. Instead of writing custom API integrations for every AI framework, you expose your capabilities as MCP tools and any MCP-compatible agent can use them.
For health AI, this is significant: it means an AI agent can be connected to FHIR health data without you having to trust that agent with raw API keys, unrestricted data access, or unaudited writes. HealthClaw Guardrails is the security layer that makes that connection safe.
Read tools (no step-up required):
- context.get — Retrieve a pre-built, time-limited context envelope
- fhir.read — Read a resource (auto-redacted)
- fhir.search — Search with patient, code, status, date filters
- fhir.validate — Structural validation before any write
- fhir.stats — Observation statistics (count/min/max/mean)
- fhir.lastn — Most recent N observations per code
- fhir.permission_evaluate — R6 access control with human-readable reasoning
- fhir.subscription_topics — List SubscriptionTopics
- curatr.evaluate — Data quality evaluation for patient health records

Write tools (require step-up token + human confirmation for clinical types):
- fhir.propose_write — Validate and preview a write without committing
- fhir.commit_write — Commit with full guardrail enforcement
- curatr.apply_fix — Apply patient-approved data quality fixes with Provenance

Curatr is a patient-facing data quality skill. Your health records often contain coding errors — ICD-9 codes that were retired in 2015, missing required fields, or medication codes that don't match any known drug. These errors affect how AI tools interpret your health, and how insurers and providers see you.
Curatr evaluates your FHIR resources against live public terminology services (no account needed), explains issues in plain language with their real-world impact, and lets you approve fixes. Every approved fix creates a FHIR Provenance record with full attribution. It supports: Condition, AllergyIntolerance, MedicationRequest, Immunization, Procedure, DiagnosticReport.
The goal: your health data belongs to you. You should be able to see what's wrong with it and decide how to fix it.
Set the FHIR_UPSTREAM_URL environment variable:
FHIR_UPSTREAM_URL=https://hapi.fhir.org/baseR4 python main.py
All guardrails remain active. Reads are fetched from upstream, redacted, and audited. Writes are validated locally first, then forwarded with step-up auth enforcement. Upstream URLs never leak to clients.
Tested with: HAPI FHIR R4/R5, SMART Health IT, Epic Sandbox.
Required in production:
- STEP_UP_SECRET — HMAC secret for step-up tokens. Auto-generated on Vercel.
- SQLALCHEMY_DATABASE_URI — Use PostgreSQL in production (default: SQLite)

Optional:
- FHIR_UPSTREAM_URL — Connect to a real FHIR server
- SESSION_SECRET — Flask session key
- FHIR_UPSTREAM_TIMEOUT — Upstream request timeout in seconds (default 15)
- REDIS_URL — For rate limiting and sessions (default: redis://redis:6379/0)

The guardrail patterns are production-grade. The local storage mode (SQLite JSON blobs) is not — it lacks indexed search, horizontal scaling, and proper query performance. For production:
- STEP_UP_SECRET

The human-in-the-loop mechanism is currently header-based, not cryptographically signed. Treat it as a pattern demonstration rather than a production-grade confirmation mechanism.
The Wiki covers architecture, guardrail patterns, FHIR concepts, and how each component works.