The Documentation Burden
Ask any orthopedic surgeon what they like least about their job. The answer is almost never surgery itself. It is usually:
"Charting."
Assessment notes, SOAP documentation, follow-up reports, quality submissions. These consume hours each week, yet every entry must be precise. You cannot afford mistakes, but you also cannot afford the time it takes to write everything from scratch.
This is the problem Doctor AI was built to solve.
Phase 1: AI That Reads
Earlier in 2026, we shipped Doctor AI Phase 1. Physicians connect their preferred AI tool (Claude Code, Gemini CLI, Codex CLI, or any MCP-compatible client) via an API Token, and the AI can:
- Read patient VAS trends, adherence rates, and PROM scores
- Generate weekly summaries and trend analyses
- Answer questions like "Which patients have declining adherence?" or "Is this patient recovering on track?"
Phase 1 was read-only. AI could observe, but could not modify anything. PII (national ID, phone, email) was automatically stripped before reaching the AI.
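To make the PII-stripping guarantee concrete, here is a minimal sketch of the idea. The field names (`national_id`, `phone`, `email`, `vas_trend`) are illustrative assumptions, not iRehab's actual schema:

```python
# Hedged sketch: remove PII fields from a patient record before it
# reaches the AI layer. Field names are illustrative assumptions,
# not iRehab's actual data model.
PII_FIELDS = {"national_id", "phone", "email"}

def strip_pii(record: dict) -> dict:
    """Return a copy of the record with PII fields removed."""
    return {k: v for k, v in record.items() if k not in PII_FIELDS}

patient = {
    "id": "p-1023",               # opaque internal ID, safe to expose
    "national_id": "A123456789",  # stripped
    "phone": "0912-345-678",      # stripped
    "email": "wang@example.com",  # stripped
    "vas_trend": [6, 5, 4, 3],    # clinical data the AI may read
}
clean = strip_pii(patient)
```

The key design point is that stripping happens server-side, before the response leaves the API, so no client configuration can bypass it.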
This worked well. But physicians started asking: "The AI already has all the context — can it write the SOAP notes for me?"
Phase 2: AI That Writes (Drafts Only)
Phase 2 opens write access. AI can now draft clinical assessment notes (Assessments) for the physician.
But here is the critical design decision:
AI writes are saved as drafts. They never auto-publish.
We call this principle draft-only enforcement.
Why Not Let AI Publish Directly?
Because clinical decision accountability belongs to the physician.
AI can look at a VAS trend and suggest "pain improving, recommend phase advancement." But it does not know that the patient walked into the clinic today looking uncomfortable. It does not know the patient fell yesterday. It does not know the patient has psychological resistance to a specific exercise.
These contextual signals are not yet fully available to AI. The correct workflow is: AI provides a draft, the physician reviews it in 30 seconds, edits as needed, and confirms. Not: AI writes directly into the official record.
How It Works in Practice
A Clinic Visit Scenario
The patient just finished ROM measurement. The physician types one line into their AI tool:
"Mr. Wang, 6 weeks post-op TKA, ROM 120/0, walking with minimal pain, ready to advance."
The AI does three things:
- Reads patient data automatically — pulls VAS trends, PROM scores, exercise adherence, and the last assessment record
- Fills in the form — enters ROM 120/0 into flexion/extension fields, sets progression to "advance," pre-fills S/O/A/P notes using trend data
- Asks about what was not mentioned — "You did not mention effusion grade or post-exercise VAS. Should I include them? Last assessment was trace effusion, VAS 3."
The physician replies "effusion resolved, VAS 2." The AI completes the draft and submits it via MCP Server.
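The resulting tool call might carry a payload like the sketch below. Only the tool name `draft_assessment` comes from the MCP Server description; the argument names and values are assumptions reconstructed from the scenario above:

```python
# Hedged sketch of a draft_assessment tool-call payload for the
# scenario above. Field names are assumptions, not the actual spec.
payload = {
    "tool": "draft_assessment",
    "arguments": {
        "patientId": "p-1023",           # hypothetical opaque ID
        "kneeFlexion": 120,              # "ROM 120/0"
        "kneeExtension": 0,
        "vas": 2,                        # physician's follow-up answer
        "effusionGrade": "none",         # "effusion resolved"
        "progressionDecision": "advance",
        "soap": {
            "S": "Walking with minimal pain, 6 weeks post-op TKA.",
            "O": "ROM 120/0; effusion resolved; VAS 2.",
            "A": "Recovery on track; ready to advance.",
            "P": "Advance phase, pending physician confirmation.",
        },
        "status": "draft",  # the only status an AI Token may set
    },
}
```

Note that `status` is hard-coded to `draft` here; as described below, the API layer rejects anything else from an AI Token regardless of what the client sends.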
In the Doctor PWA
- A new entry appears in the assessment list with a purple badge — meaning "AI draft, needs confirmation"
- The physician opens it and sees the complete form: S/O/A/P + ROM + VAS + effusion + progression decision
- Confirms as-is (or edits) → taps "Confirm" → the record becomes official. Phase advancement executes only at this moment
The design goal: the physician says two sentences, and AI fills out the form. What used to be a dozen manual field selections becomes one conversation.
Technical Architecture
| Layer | Description |
|---|---|
| MCP Server v2.0.0 | 2 write tools (draft_assessment, draft_prescription) + 6 read tools (trends, alerts, PROM, etc.) |
| Default-deny API | Allowlist-based — only explicitly listed endpoints are accessible via AI Token |
| Draft-only enforcement | At the API layer: AI Tokens can only write records with status=draft. Setting status=published is rejected |
| Scope management | Physicians explicitly authorize write permissions in the Doctor PWA Token Scope UI |
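The two middle rows of the table can be pictured with a short sketch. The endpoint paths and token kinds are illustrative assumptions; only the allowlist and draft-only rules themselves come from the table:

```python
# Hedged sketch of two API-layer rules: a default-deny allowlist and
# draft-only enforcement for AI Tokens. Endpoint names are invented
# for illustration.
ALLOWLIST = {
    ("GET", "/api/patients/trends"),
    ("GET", "/api/patients/prom"),
    ("POST", "/api/assessments"),
}

def authorize(method: str, path: str, body, token_kind: str) -> bool:
    # Default deny: anything not explicitly allowlisted is rejected.
    if (method, path) not in ALLOWLIST:
        return False
    # Draft-only enforcement: an AI Token may never set a non-draft
    # status; publishing happens only via physician confirmation.
    if token_kind == "ai" and body is not None:
        if body.get("status", "draft") != "draft":
            return False
    return True
```

The advantage of default-deny is that newly added endpoints are invisible to AI Tokens until someone deliberately allowlists them, so forgetting a rule fails closed rather than open.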
Safety Boundaries
We define explicit boundaries for what Doctor AI can and cannot do:
AI Can
- Read patient rehabilitation data (VAS, PROM, exercise logs, assessment history)
- Draft assessments from brief physician instructions (SOAP notes + ROM + VAS + effusion + progression)
- Draft exercise prescriptions (select exercises from library, set sets/reps)
- Ask follow-up questions about missing fields (this is the AI tool's natural conversational ability, not a server feature)
- Generate trend analyses and weekly reports
AI Cannot
- Publish any record directly (physician confirmation required)
- Execute phase advancement directly (deferred until physician confirms the draft)
- Draft surgical records or billing entries (currently limited to assessments and prescriptions)
- Access PII (national ID, phone, email are stripped automatically)
- Access patients outside the physician's authorized scope
- Diagnose independently or auto-prescribe
What If AI Gets It Wrong?
Nothing happens — because it is a draft. If the physician spots any issue before confirming, they delete or edit it. Drafts do not enter the patient's official record, do not affect PROM scheduling, and do not trigger any clinical workflows.
The cost of a bad draft is zero. The cost of a bad auto-published note could be significant. This asymmetry is exactly why draft-only enforcement exists.
BYO-LLM: No Lock-In, No Hosting
Another deliberate design choice: iRehab does not embed a specific AI chat interface.
We provide a standard MCP Server and API Token interface. Physicians choose their own AI tool. Claude Code, Gemini CLI, Codex CLI, local models — all work.
The reasoning:
- AI models turn over every 6 months — binding to a specific vendor is short-sighted
- Data sovereignty — the physician's choice of AI provider determines whose servers process the data. Enterprise tiers typically do not retain data
- Cost — different AI providers have different pricing. Physicians should have the choice
iRehab's role is to provide a secure data access layer, not to become an AI vendor.
Just Talk: Voice Input + AI Form Filling
A common question: "I do not want to type. Can I just speak?"
Yes, and no extra setup is required.
iPhone, Mac, and Android all have built-in dictation — tap the microphone key on any keyboard. The physician taps the mic in their AI tool's input field, speaks, sends, and the AI maps natural language to structured form fields.
No model "training" is needed. The MCP Server defines a schema for each field (ROM flexion/extension, VAS 0-10, effusion grade, etc.). The LLM reads this schema and knows how to map speech to fields. "Flexion one-twenty, extension zero" becomes kneeFlexion: 120, kneeExtension: 0 automatically.
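A per-field schema of this kind can be sketched as follows. The field names, ranges, and effusion grades are assumptions drawn from the fields mentioned in this post, not the actual MCP Server spec:

```python
# Hedged sketch of a per-field schema the MCP Server could publish,
# plus a validator for values the LLM maps from dictated speech.
SCHEMA = {
    "kneeFlexion":   {"type": int, "min": 0,   "max": 160},  # degrees
    "kneeExtension": {"type": int, "min": -10, "max": 10},   # degrees
    "vas":           {"type": int, "min": 0,   "max": 10},   # pain scale
    "effusionGrade": {"type": str, "enum": ["none", "trace", "1+", "2+", "3+"]},
}

def validate(field: str, value) -> bool:
    """Check one mapped value against its field schema."""
    spec = SCHEMA[field]
    if not isinstance(value, spec["type"]):
        return False
    if "min" in spec and not (spec["min"] <= value <= spec["max"]):
        return False
    if "enum" in spec and value not in spec["enum"]:
        return False
    return True

# "Flexion one-twenty, extension zero" -> the LLM emits:
mapped = {"kneeFlexion": 120, "kneeExtension": 0}
```

Validation on the server side means a misheard "VAS eleven" is bounced back to the physician as a question rather than silently stored.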
Define Your Shortcuts with CLAUDE.md
If you use Claude Code, you can write your preferred shorthand in the project's CLAUDE.md file:
```markdown
# My shorthand
- "advance" or "ready to progress" = progressionDecision: advance
- "step back" = progressionDecision: regress
- "swollen" = effusionGrade — ask me for severity
- If I don't mention a field, always ask — never guess
```
The AI follows these rules every session — effectively natural language macros. Other AI tools have similar system prompt configuration options.
Current Limitations
- System dictation handles everyday language well, but English medical abbreviations (ROM, VAS, TKA) may occasionally be misrecognized
- No real-time streaming — you speak, send, then AI processes (not live transcription into fields)
- Voice input quality depends on your device and environment, not on iRehab
Trust But Verify
The design behind draft-only enforcement follows an old principle: trust, but verify.
We trust AI's capability — it genuinely produces reasonable clinical assessment drafts based on patient data. But we also verify — every draft must pass through a human physician's eyes and judgment before it becomes real.
This is not distrust of AI. It is a commitment to keeping a human in the loop for clinical decisions.
As AI reliability improves over time, draft-only is a starting point that can be gradually relaxed. But at launch, we err on the side of caution. The history of medical technology teaches us that conservative rollouts with clear safety boundaries earn more trust than aggressive ones that occasionally fail.
Two Sentences to Fill a Form
Back to the original problem: the documentation burden.
With Doctor AI Phase 2, the clinic visit workflow changes from "open form, manually fill each field, save" to "tell AI two sentences, answer one follow-up, confirm." The physician does not need to remember where each field is or write SOAP notes from scratch. AI already knows the patient's trends — it just needs today's observations from you.
The time saved goes to what matters more — like spending an extra minute talking to the patient.
To try Doctor AI, generate a Token from Doctor PWA → Profile → API Token, and connect your preferred AI tool. Setup takes 3 minutes.
Full setup guide: denovortho.com/irehab/ai-setup
