Frequently Asked Questions

Everything you need to know about HealthClaw Guardrails, FHIR, and AI-safe clinical data access.

What is HealthClaw Guardrails?

HealthClaw Guardrails is an open-source security proxy that sits between any AI agent and any FHIR health data server. It enforces safety and compliance rules on every request — automatically, without changing your agent code or your FHIR server.

Think of it as a security guard that every AI health request must pass through. The guard redacts PHI, logs every access, requires extra authorization for writes, and blocks clinical changes until a human confirms them.

It is an open-source project from healthclaw.io.

Is this a FHIR server?

No. In local/demo mode it stores JSON blobs in SQLite — purely for testing the guardrail patterns. In production use, you point it at a real FHIR server (HAPI, Epic, Medplum, AWS HealthLake) and it acts as a transparent proxy, enforcing guardrails on every request that passes through.

All guardrails — PHI redaction, audit trail, step-up auth, human-in-the-loop — apply regardless of whether you're using local mode or a real upstream server.

What FHIR versions are supported?

Two profiles:

  • FHIR R4 with US Core v9 — Stable, widely deployed US healthcare resources. AllergyIntolerance, Immunization, MedicationRequest, Procedure, DiagnosticReport, CarePlan, Coverage, and 13+ more types. These are safe for production use.
  • FHIR R6 v6.0.0-ballot3 — Experimental ballot resources: Permission, SubscriptionTopic, DeviceAlert, NutritionIntake. These are for research and may change before R6 final release.

Both flow through the same guardrail stack.

Security & PHI
What PHI is redacted and how?

The following redactions are applied on every read path, including responses from upstream FHIR servers:

  • Names → truncated to initials (Maria Elena Rivera → M. E. Rivera)
  • Identifiers → masked to last 4 characters (MRN12345 → ***2345)
  • Addresses → stripped to city, state, country only
  • Birth dates → truncated to year only (1985-03-15 → 1985)
  • Phone / email / telecom → fully removed
  • Photos → removed

This is HIPAA Safe Harbor de-identification at the read layer. Data is stored unredacted; redaction happens at response time.
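As an illustrative sketch of these rules (not the project's actual implementation — the function name and exact field handling here are ours, following the standard FHIR Patient structure):

```python
import copy

def redact_patient(resource: dict) -> dict:
    """Apply Safe Harbor-style read-time redaction to a FHIR Patient (sketch)."""
    out = copy.deepcopy(resource)

    # Names: truncate given names to initials, keep the family name
    # ("Maria Elena Rivera" -> "M. E. Rivera").
    for name in out.get("name", []):
        name["given"] = [g[0] + "." for g in name.get("given", []) if g]

    # Identifiers: mask to the last 4 characters (MRN12345 -> ***2345).
    for ident in out.get("identifier", []):
        value = ident.get("value", "")
        ident["value"] = "***" + value[-4:] if len(value) > 4 else "***"

    # Addresses: strip to city, state, country only.
    out["address"] = [
        {k: a[k] for k in ("city", "state", "country") if k in a}
        for a in out.get("address", [])
    ]

    # Birth dates: truncate to year only (1985-03-15 -> 1985).
    if "birthDate" in out:
        out["birthDate"] = out["birthDate"][:4]

    # Telecom and photos: remove entirely.
    out.pop("telecom", None)
    out.pop("photo", None)
    return out
```

Because redaction happens at response time, the stored resource is untouched and the caller only ever sees the de-identified copy.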

What is step-up authorization?

Reads are open (requiring only a tenant header). Writes require an additional HMAC-SHA256 signed token — the step-up token.

Tokens are generated at /r6/fhir/internal/step-up-token, signed with your STEP_UP_SECRET, include a 128-bit nonce, and expire after 5 minutes. They are tenant-bound — a token for tenant A cannot be used for tenant B.

This prevents an AI agent from autonomously writing clinical data without explicit human intent behind the token issuance.
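The token mechanics can be sketched like this (the wire format below is an assumption for illustration — only the HMAC-SHA256 signature, 128-bit nonce, 5-minute expiry, and tenant binding come from the project's description):

```python
import base64
import hashlib
import hmac
import json
import os
import time

SECRET = b"example-step-up-secret"  # stands in for STEP_UP_SECRET

def issue_token(tenant: str, ttl: int = 300) -> str:
    """Mint a tenant-bound step-up token with a 128-bit nonce and 5-minute TTL."""
    payload = {
        "tenant": tenant,
        "nonce": os.urandom(16).hex(),   # 128 bits of randomness
        "exp": int(time.time()) + ttl,   # expires after 5 minutes by default
    }
    body = base64.urlsafe_b64encode(json.dumps(payload).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig

def verify_token(token: str, tenant: str) -> bool:
    """Reject tokens with a bad signature, the wrong tenant, or past expiry."""
    try:
        body, sig = token.rsplit(".", 1)
    except ValueError:
        return False
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    payload = json.loads(base64.urlsafe_b64decode(body))
    return payload["tenant"] == tenant and payload["exp"] > time.time()
```

The tenant check is what makes a token for tenant A useless against tenant B, and the constant-time `compare_digest` avoids timing side channels on the signature.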

What is human-in-the-loop enforcement?

Clinical resource writes (Observation, Condition, MedicationRequest, DiagnosticReport, AllergyIntolerance, Procedure, CarePlan, Immunization, NutritionIntake, DeviceAlert) return HTTP 428 Precondition Required unless the request includes the header X-Human-Confirmed: true.

This forces whatever system is issuing the write to explicitly set that header — meaning a human or human-authorized workflow confirmed the action. The AI agent cannot bypass this on its own.

Limitation: this is header-based, not cryptographic. A future version will use signed human-confirmation tokens.
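The gate itself is simple; a minimal sketch of the check (function name is ours, the header name and clinical-type list come from the description above):

```python
CLINICAL_TYPES = {
    "Observation", "Condition", "MedicationRequest", "DiagnosticReport",
    "AllergyIntolerance", "Procedure", "CarePlan", "Immunization",
    "NutritionIntake", "DeviceAlert",
}

def check_human_confirmation(resource_type: str, headers: dict):
    """Return (status, body) for the human-in-the-loop gate (illustrative sketch)."""
    if resource_type in CLINICAL_TYPES and headers.get("X-Human-Confirmed") != "true":
        # 428 Precondition Required until the confirmation header is present
        return 428, {"error": "human confirmation required for clinical writes"}
    return 200, {"status": "ok"}
```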

Is the audit trail truly immutable?

Yes, at the application layer. SQLAlchemy event listeners fire on before_update and before_delete for AuditEventRecord and raise a RuntimeError, preventing any modification.

Every resource access — read, create, update, validate — writes an AuditEvent recording the tenant, agent, resource, and outcome. These cannot be modified through the application.

Note: a database administrator with direct SQL access could still modify records. For true immutability, store audit events in an append-only ledger or cloud audit service.
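The listener pattern looks roughly like this (a self-contained sketch — the model and column names are illustrative, not the project's actual schema):

```python
from sqlalchemy import Column, Integer, String, create_engine, event
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class AuditEventRecord(Base):
    __tablename__ = "audit_events"
    id = Column(Integer, primary_key=True)
    outcome = Column(String)

# Block UPDATE and DELETE at the ORM layer: any flush that tries to
# modify or remove an audit row raises before it reaches the database.
@event.listens_for(AuditEventRecord, "before_update")
@event.listens_for(AuditEventRecord, "before_delete")
def _audit_is_append_only(mapper, connection, target):
    raise RuntimeError("audit events are append-only")

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
```

Inserts still work normally; only mutation and deletion are blocked, which is why direct SQL access (bypassing the ORM) remains the caveat noted above.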

AI Agents & MCP
What is MCP and why does it matter for health AI?

Model Context Protocol (MCP) is an open standard that lets AI agents connect to tools and data sources in a structured, describable way. Instead of writing custom API integrations for every AI framework, you expose your capabilities as MCP tools and any MCP-compatible agent can use them.

For health AI, this is significant: it means an AI agent can be connected to FHIR health data without you having to trust that agent with raw API keys, unrestricted data access, or unaudited writes. HealthClaw Guardrails is the security layer that makes that connection safe.

What are the 12 MCP tools?

Read tools (no step-up required):

  • context.get — Retrieve a pre-built, time-limited context envelope
  • fhir.read — Read a resource (auto-redacted)
  • fhir.search — Search with patient, code, status, date filters
  • fhir.validate — Structural validation before any write
  • fhir.stats — Observation statistics (count/min/max/mean)
  • fhir.lastn — Most recent N observations per code
  • fhir.permission_evaluate — R6 access control with human-readable reasoning
  • fhir.subscription_topics — List SubscriptionTopics
  • curatr.evaluate — Data quality evaluation for patient health records

Write tools (require step-up token + human confirmation for clinical types):

  • fhir.propose_write — Validate and preview a write without committing
  • fhir.commit_write — Commit with full guardrail enforcement
  • curatr.apply_fix — Apply patient-approved data quality fixes with Provenance
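An MCP client invokes these via standard JSON-RPC `tools/call` requests; for example, a search might look like this (the argument names are illustrative — only the tool name comes from the list above):

```python
import json

# A JSON-RPC "tools/call" request as an MCP client might send it.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "fhir.search",
        "arguments": {"resourceType": "Observation", "patient": "Patient/123"},
    },
}
print(json.dumps(request, indent=2))
```
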

What is Curatr?

Curatr is a patient-facing data quality skill. Your health records often contain coding errors — ICD-9 codes that were retired in 2015, missing required fields, or medication codes that don't match any known drug. These errors affect how AI tools interpret your health, and how insurers and providers see you.

Curatr evaluates your FHIR resources against live public terminology services (no account needed), explains issues in plain language with their real-world impact, and lets you approve fixes. Every approved fix creates a FHIR Provenance record with full attribution. It supports: Condition, AllergyIntolerance, MedicationRequest, Immunization, Procedure, DiagnosticReport.

The goal: your health data belongs to you. You should be able to see what's wrong with it and decide how to fix it.
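For reference, a minimal FHIR Provenance resource of the kind an approved fix might produce (the shape follows the standard FHIR Provenance resource; the specific target, timestamp, and agent details here are illustrative):

```python
# Minimal FHIR Provenance resource recording a patient-approved fix.
provenance = {
    "resourceType": "Provenance",
    "target": [{"reference": "Condition/example"}],   # the resource that was fixed
    "recorded": "2025-01-01T00:00:00Z",               # when the fix was applied
    "agent": [
        {"who": {"display": "Curatr"},
         "onBehalfOf": {"display": "Patient (approved fix)"}}
    ],
}
```
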

Setup & Deployment
How do I connect it to my real FHIR server?

Set the FHIR_UPSTREAM_URL environment variable:

FHIR_UPSTREAM_URL=https://hapi.fhir.org/baseR4 python main.py

All guardrails remain active. Reads are fetched from upstream, redacted, and audited. Writes are validated locally first, then forwarded with step-up auth enforcement. Upstream URLs never leak to clients.

Tested with: HAPI FHIR R4/R5, SMART Health IT, Epic Sandbox.

What environment variables do I need?

Required in production:

  • STEP_UP_SECRET — HMAC secret for step-up tokens. Auto-generated on Vercel.
  • SQLALCHEMY_DATABASE_URI — Use PostgreSQL in production (default: SQLite)

Optional:

  • FHIR_UPSTREAM_URL — Connect to a real FHIR server
  • SESSION_SECRET — Flask session key
  • FHIR_UPSTREAM_TIMEOUT — Upstream request timeout in seconds (default 15)
  • REDIS_URL — For rate limiting and sessions (default: redis://redis:6379/0)
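A fail-fast startup check for these is straightforward; a sketch (the function is ours, not the project's actual startup code):

```python
import os

def validate_config(env: dict) -> list[str]:
    """Return a list of production-readiness problems (illustrative sketch)."""
    problems = []
    if not env.get("STEP_UP_SECRET"):
        problems.append("STEP_UP_SECRET is required for step-up token signing")
    if env.get("SQLALCHEMY_DATABASE_URI", "").startswith("sqlite"):
        problems.append("SQLite is for local/demo use; use PostgreSQL in production")
    return problems

for problem in validate_config(dict(os.environ)):
    print("config warning:", problem)
```
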

Can I use this in production?

The guardrail patterns are production-grade. The local storage mode (SQLite JSON blobs) is not — it lacks indexed search, horizontal scaling, and proper query performance. For production:

  • Use upstream proxy mode pointing at a real FHIR server
  • Set a strong STEP_UP_SECRET
  • Use PostgreSQL for the audit database
  • Enable Redis for rate limiting
  • Deploy behind HTTPS

The human-in-the-loop mechanism is currently header-based, not cryptographically signed. Treat it as a pattern demonstration rather than a production-grade confirmation mechanism.

Want to go deeper?

The Wiki covers architecture, guardrail patterns, FHIR concepts, and how each component works.

  • Read the Wiki
  • Try the Dashboard
  • GitHub