CHORUS v1.0 Synthesis · Confidential · 2026-05-12

Lead your practice with Praxis AI

Clinical reasoning, applied. The strategic brief for SynthesisArc's medical AI product line.

Built for verified clinicians and the practices they serve

SynthesisArc · NPI-Gated · VITALS Engineered · CME-Reimbursable · Lane 3 GREEN
Prepared by NIGEL
For Breyon Bradford (CEO), Daniel Willitzer (CTO), Rick Anderson (Partner)
14
AI minds asked the same questions
Six research agents, six independent AI brains, and two of our cognitive entities, all working in parallel.
54,254
Words of evidence gathered
Pulled from public conversations, peer-reviewed studies, government data, and competitive intelligence.
707
Sources cited by name
Every claim in this brief points to a real published source. No folklore numbers. No "studies show" without naming the study.
3
Products mapped end-to-end
Prompt Packs, AI Residency, Personal Assistant, what each is, who it serves, what makes it defensible.
Executive Recommendation 8,092 words · 15 sections in full brief

Ship a specialty-modular Prompt Pack catalog ($47-$97) gated by NPI verification as the entry wedge. Layer the AI Residency ($497 self-paced) for practitioners building their first AI workflows, and the AI Fellowship ($997 cohort + $1,497 specialty tracks) for clinicians ready to specialize and lead AI adoption in their field. All three sit firmly in HIPAA training/education carve-out territory (Lane 3 GREEN zone) and never touch PHI in v1. Hold the Personal AI Assistant on a waitlist: it requires a BAA stack, PHI redaction, and counsel sign-off that take real time to do right.

Module 01 · The Pain

Quantified Medical Workforce Pain

Before we talk about what to build, we need to look at what's actually breaking. Below is what fourteen AI sources agreed on after combing through public conversations, professional surveys, and government data: the specific, quantified pain that medical practitioners live with every day.

How to read this: the numbers on the right are real figures, each tied to a named study. The red bars mark the worst pain points (over 80% of practitioners affected). The two stats that matter most for our product line (87% of physicians charting at home, and 13 hours a week lost to insurance prior authorization) are the friction we attack first.

Editorial illustration of medical paperwork burden
Pain Signal Magnitude (% of practitioners)
Documentation Burden
1.4hrs/night
"Pajama time": physicians charting at home after their workday ends. (AMA Burnout 2024)
Prior Auth Burden
13hrs/week
Time spent on prior authorization per physician per week. 39-43 PAs per week. (AMA Prior Auth 2024)
Adverse Events from PA
29%
of physicians say prior authorization caused a serious adverse event in a patient under their care.
Strategic Signal

Replacement fear intensifies with AI knowledge: 88% of physicians worry about clinical skill atrophy; 75% worry GenAI threatens judgment. The "You drive. AI is the GPS." framing directly inverts that fear into permission to adopt. This is empirically validated, not a slogan.

Module 02 · The Competition

Where the Whitespace Lives

Most people assume medical AI is a crowded space. It isn't, at least not where we're playing. The big enterprise companies (Abridge, DAX Copilot) are fighting over hospital contracts. The training companies (Coursera, Stanford, Harvard) are either too cheap to seem credible or too expensive for individual practitioners. Right in the middle, there's nothing, and that's where we land.

How to read this: the chart below plots every meaningful competitor on price (left to right) and credibility (bottom to top). The purple zone is the whitespace: nothing credible exists between Coursera Plus ($399/year) and Harvard online ($3,100). Our AI Residency lives right in the middle of that gap. The blue dots are us. The gray dots are everyone else.

Market Tiers: Cheap to Premium

Where Everyone Lives, and Where We Don't

Below is every meaningful player on the price spectrum. The cheap end sells single prompts. The expensive end sells institutional CME. In the middle band, nothing credible exists, and that's where we land.

Single Prompts
$5 – $50

Notebook screenshots, basic templates

PromptBase medical $5
AIPRM medical $9
DataAnnotation Tech $50/hr (labor)
Online Education
$399 – $499

Generic medical AI courses, no specialty depth

Coursera Plus $399/yr
DeepLearning.AI Healthcare $499
🎯 OUR ZONE
The Whitespace
$497 – $1,497

Nothing credible exists in this band. SA lands here.

SA Pro Prompt Pack $97
SA Bundle (3 packs) $297
AI Residency (self-paced) $497
AI Fellowship (cohort) $997
AI Fellowship (specialty tracks) $1,497
Elite CME
$1,500 – $3,100

Premium institutional credibility, slow, generalized

Stanford CME AI $1,500
Harvard Medical Online $3,100
Different market
Enterprise
$100K+ contracts

Different game, hospital-system sales motion

Abridge Enterprise
DAX Copilot Enterprise
Doximity AI Network/ads

The strategic insight: the band between $499 (where the online-education tier tops out) and $1,500 (where Stanford CME starts) is empty. Nobody has built credible, specialty-modular medical AI training in the $497–$1,497 price range, the exact band where individual practitioners pay for serious professional development. AI Residency anchors the foundational entry ($497 self-paced). AI Fellowship claims the advanced tier ($997 cohort + $1,497 specialty tracks). Same family, different levels of depth, exactly how medical training is already structured.

The Template
Heidi Health
2M weekly clinicians in 18 months on $96M raise. Practitioner-first PLG with freemium NPI gate.
→ Closest template for SA
The Playbook
OpenEvidence
760K physicians + $12B valuation in <3 years. 100% free, NPI-gated, content partnerships.
→ Distribution model to copy
The Cautionary Tale
Olive AI
Raised $902M. Collapsed 2023. Sold "AI transformation" instead of atomic per-clinician outcomes.
→ Never sell "transformation"
Competitive Landscape
Company · Category · Valuation · Raised · Threat · Why it matters
Abridge · Enterprise Scribe · $5.3B · $1.1B · Low · Different segment (enterprise, not practitioner). Path closed for SA capital-wise.
Heidi Health · Practitioner-first Scribe · n/a · $96M · Template · CLOSEST template for SA. 2M weekly clinicians in 18 months. Practitioner-first PLG.
OpenEvidence · Free Reference · $12B · n/a · Template · NPI-gated free tier playbook. 760K physicians in <3 years.
Doximity · Physician Network · Public · n/a · HIGH · 85% of US physicians on network. If they pivot to training, instant competitor.
Microsoft Cloud for Healthcare · Enterprise Platform · Parent: $3T · n/a · HIGH · Could bundle a free AI Residency with Copilot for Healthcare next quarter.
DAX Copilot (Microsoft/Nuance) · Enterprise Scribe · Parent: $3T · n/a · Low · 600+ health systems. Different segment (enterprise scribe, not practitioner training).
Coursera Plus · Education · Public · n/a · Medium · $399/yr unlimited. Adjacent; they could ship a medical AI specialization.
Stanford CME / Harvard Medical Online · CME Education · Institutional · n/a · Medium · $1,500-$3,100 range. Premium credibility but slow, not specialty-modular.
Olive AI · Failed · $0 · $902M · Cautionary · Sold "AI transformation" instead of atomic per-clinician outcomes. Collapsed 2023.

What this means: the enterprise scribes (Abridge, DAX Copilot) are fighting over hospital contracts, not practitioners. They're not our competition. The two templates we copy openly are Heidi Health (practitioner-first scribe, 2M weekly clinicians on $96M raise) and OpenEvidence (NPI-gated free tier, 760K physicians in under three years). Doximity and Microsoft are the watch-list. If either pivots into medical AI training, they could move quickly. Olive AI is the warning sign: $902M raised, collapsed in 2023, because they sold "AI transformation" instead of fixing one specific problem for one specific person. We don't make that mistake.

Module 03 · The Regulatory Gate

What Ships When. Compliance Zones

Anything medical lives or dies on compliance. The good news: most of what we're shipping first doesn't touch the dangerous regulatory zones. The bad news: one wrong feature can drag us into FDA territory overnight. Below is the map. Green is "ship now," yellow is "ship after legal review," and red is "do not ship without a full FDA program."

How to read this: our Tier 1 Prompt Packs and Tier 2 AI Residency both sit firmly in the GREEN zone. They're training and education; they don't touch patient data; they don't make clinical decisions. The Personal Assistant (Tier 3) starts in yellow and inches toward red the moment it sounds like clinical judgment. That's why we move slowly on it.

GREEN

Ship today with Disclaimer Stack

  • Sales Academy-style training (AI Residency)
  • Non-PHI prompt packs with role/scope guardrails
  • Non-PHI marketing AI
  • Compliance via Disclaimer Stack + role-scoped guards
YELLOW

Ship with BAA + counsel sign-off

  • Anything ingesting PHI
  • Patient education with personalization
  • Workflow tools that process clinical data
  • Required: BAA template, counsel review, 1-3 months
RED

Do NOT ship without FDA + counsel program

  • Diagnostic suggestions
  • Treatment recommendations
  • Autonomous clinical decision-making
  • Medication dosage / drug interaction warnings
  • Required: FDA pre-submission, 6-12+ months
FDA Enforcement Precedent
Exer Labs (Feb 2025) · Warning Letter

First FDA enforcement on movement-tracking AI marketed without proper SaMD classification. The line: if AI suggests clinical interpretation, it's a device.

Purolea (Apr 2026) · Warning Letter

Differential-suggestion AI without 510(k). FDA cited Section 201(h) device definition. Class II/III device pathway required.

EU AI Act · August 2, 2026

High-risk healthcare AI classifications enforcement begins. Top penalty: €35M / 7% global revenue, verified to final Regulation (EU) 2024/1689 text. For SA: Tier 1 + Tier 2 likely qualify as training/education (not high-risk). Tier 3 Personal Assistant requires legal scoping before EU launch.

What this means: the FDA already started warning companies (Exer Labs Feb 2025, Purolea Apr 2026) the moment their AI suggested clinical interpretation. That precedent makes the GREEN zone the smart starting line. Prompt Packs and the Residency teach practitioners how to use AI; they never make clinical decisions. Same logic protects us in Europe: training and education isn't classified as high-risk under the EU AI Act. The Personal AI Assistant is held back specifically because the moment it sounds like clinical judgment, we'd be in FDA device territory and EU high-risk territory simultaneously. That's why it ships last, with counsel involved from day one.

Module 04 · The Positioning

"You Drive. AI Is the GPS."

Most medical professionals are scared of AI. Not because they don't see the value, but because they've been told for years that AI is coming for their jobs. That fear is the single biggest barrier to adoption. The good news: it has a simple cure. Think of AI the way you think of your GPS. You're still the driver. You still pick the destination. You still choose the route. The GPS just shows you the fastest way to get there, warns you about traffic, and knows shortcuts you didn't know existed. If your gut says take the back road, you take the back road.

Why this works: everyone has used a GPS. Nobody thinks the GPS is going to take their car. It guides; it doesn't drive. The analogy came from Shanel; it's the framing that made it click. When Breyon taught it in a class, students who walked in scared walked out excited. Same tool, same capabilities, different story. The frame matters more than the tech.

The Old Frame. Why It Fails

AI Drives, You Sit in the Back

  • 88% of physicians worry about clinical skill atrophy from AI use (AMA 2026)
  • 90% of residents perceive radiology replacement risk
  • 75% of clinicians worry GenAI threatens clinical judgment (Wolters Kluwer 2025)
  • Fear blocks purchase decision; fear blocks practice adoption; fear creates conservative bias
The New Frame. Why It Wins

AI as GPS

  • "AI shows the path, clinician picks the route, clinician drives." The operating architecture.
  • Practitioner identity preserved and enhanced: they're the driver, not the passenger
  • Inverts fear into permission: "this gives me better directions," not "this takes the wheel"
  • Marketing vocab: role-scoped, PHI-aware, draft-review-release, human accountable, evidence-first

What this means: the fear of being replaced is the largest single obstacle to AI adoption in medicine, and it grows stronger the more people learn about AI, not weaker. The GPS framing doesn't downplay AI's power. It just gives the practitioner the steering wheel back. Once they hear "you drive, AI is the GPS," the conversation shifts from "is this going to take my job" to "show me how to use it well." That single reframe is the unlock for adoption, for trust, and for every product we sell.

Real-World Validation (Shanel's framing, taught in Breyon's class)
"AI is like a brilliant GPS, and humans are like the driver. The human determines where things are going and the destination. AI is the world's best GPS, showing us the path. The driver still picks the route."
Shanel, 2026-05-12 · the analogy that made it click
"Everyone at the school was nervous because they're thinking AI is going to come in. We gave them a tool to help them prompt better. Now they don't feel threatened, they feel like, 'oh wow, I could get this tool and do even more.' The thought of replacing isn't even at play."
From the 2026-05-12 SynthesisArc meeting transcript

Module 05 · The Tier Stack

Three Tiers, One Ladder

Three products, one ladder. Each step proves the next can stand. We don't try to be everything to everyone. We start small, validate, and earn the right to grow.

How to read this: each tier has a score out of 50; that's our 5-axis ranking across consensus, regulatory feasibility, defensibility, speed, and strategic fit. The radar chart at the bottom shows how each tier scores on every axis. Tier 1 and Tier 2 score high across the board. Tier 3 scores high on long-term value but low on speed and regulatory readiness; that's why it ships last.

Tier 1 14/14 sources Score: 45/50

Medical Prompt Packs

$47-$97 per pack · $297 for the full bundle
Ships first, the wedge into the market
Individual practitioners, by role and specialty
What ships
  • Prior Authorization Pack
  • Family Medicine Pack
  • Front Desk Operations Pack
  • Nursing Handoff Pack
Tier 2 · Foundation 14/14 sources Score: 42/50

AI Residency

$497 self-paced
Opens after the packs are landing
Every clinical role, from front desk to physician
What ships
  • Foundations module: how to drive AI in your practice
  • Plain-language video curriculum
  • Pack workflows in context
  • Earned certificate of completion
Tier 2 · Advanced 12/14 sources Score: 40/50

AI Fellowship

$997 cohort · $1,497 specialty tracks
Opens once Residency cohorts complete
Physicians + advanced practitioners wanting specialty depth
What ships
  • Live cohort with peer review
  • Specialty tracks (Cardiology, FM, EM, Peds, Psych)
  • CME accreditation (reimbursable)
  • Senior practitioner mentorship
Tier 3 10/14 sources Score: 32/50

Personal AI Assistant

$19-$49/mo (with a permanently free tier for verified clinicians)
Builds slowly, ships when the workflows are clear
Practices and power users, practitioners who lean into AI daily
What ships
  • Waitlist landing page goes up alongside the Residency
  • Real beta only after legal counsel signs off
  • Built around what users repeat weekly, proven by Tier 1 + Tier 2
  • Free tier with NPI verification (OpenEvidence model)
5-Axis Recommendation Scoring (Source Consensus + Reg. Feasibility + Moat + Speed + Strategic Fit)
Module 06 · Inside the Packs

What's Actually In a Prompt Pack, and What Powers It

Each Prompt Pack is more than a list of prompts. It's a structured, role-scoped, clinician-verified toolkit built on top of an engine we already own. Below is what you get at each level, what powers it under the hood, and five real example prompts.

Why this matters: most "prompt packs" on the market are screenshots from someone's notebook. Ours are engineered: every prompt runs through a 4-phase quality cycle, every output has a verification step, and every pack is gated to verified medical professionals. This is the difference between a $9 PDF and a $97 toolkit a practice will use every day.

Level 1 · Starter
$47
One role, one specialty, the highest-leverage prompts only.
  • 8 production-ready prompts
  • Single role focus
  • Quick reference card
  • NPI verification at checkout
Level 2 · Pro
$97
Full workflow coverage for the role, the daily-driver pack.
  • 22 production prompts
  • Variation templates
  • Workflow chaining guides
  • Customer support included
Level 3 · Bundle
$297
Three Pro Packs across roles, the practice-wide bundle.
  • 3 Pro Packs combined
  • Cross-role handoff chains
  • Team training quick-start
  • Practice-level admin tools
Level 4 · Premium
$497
Full stack with quarterly updates + Residency access.
  • All current packs
  • Quarterly content updates
  • AI Residency Foundations included
  • Priority support + early access

What this means: the ladder lets a practitioner enter wherever their wallet and curiosity allow, then climb. A solo nurse buys a $47 Starter Pack to fix one workflow this week. A practice manager buys the $297 Bundle to cover three roles at once. A medical group buys the $497 Premium tier and folds in Residency access for the whole staff. Each rung is a complete product on its own. Nothing is locked behind "upgrade to the next tier to get the basics." That respect for the buyer is what makes the ladder feel premium instead of greedy.
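Every rung on the ladder is gated by NPI verification at checkout. A sketch of that gate against the public CMS NPPES NPI Registry is below. The endpoint and the `version=2.1`/`number` query parameters follow the public registry API, but the response field names used here (`result_count`, `results`, `number`) are assumptions to confirm against the live API before shipping:

```python
# NPI checkout gate sketch (illustrative, not the shipped verifier).
import json
import urllib.parse
import urllib.request

NPPES_URL = "https://npiregistry.cms.hhs.gov/api/"

def npi_record_is_verified(payload: dict, npi: str) -> bool:
    """Pure check on a registry response: exactly one record, matching this NPI."""
    if payload.get("result_count") != 1:
        return False
    return str(payload["results"][0].get("number")) == npi

def lookup_npi(npi: str) -> dict:
    """Fetch the registry record for one NPI (live network call)."""
    query = urllib.parse.urlencode({"version": "2.1", "number": npi})
    with urllib.request.urlopen(f"{NPPES_URL}?{query}") as resp:
        return json.load(resp)
```

At checkout, `lookup_npi` feeds `npi_record_is_verified`; a production gate would also check enumeration type and deactivation status, and cache results.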

The Engine Under the Hood Powered by VITALS

VITALS: Our Medical Prompt Engine

Every prompt in every pack runs through VITALS, a 4-phase engineering cycle we built (originally as PROMETHEUS Cycle, retuned for medical). It's the substrate. Practitioners never see VITALS directly; they see clean, working prompts. But it's what makes the difference between a notebook screenshot and a clinical-grade toolkit.

Phase 1
Create

Build a clear, structured base prompt using a universal XML skeleton: persona, objective, context, requirements, examples, output format.

Phase 2
Optimize

Run a flaw audit (5+ weaknesses), inject medical-grade safety boundaries, add a verification step, and score against accuracy/robustness criteria.

Phase 3
Deploy

Tune for the specific AI tool the practitioner uses (ChatGPT, Claude, Gemini), add escalation paths, lock the output format, and ship.

Phase 4
Iterate

Track what works in real practice, gather clinician feedback, refine quarterly. Every pack gets better over time.
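In practice, the Phase 1 "universal XML skeleton" looks like the sketch below. The tag names and contents are illustrative, not the shipped template:

```xml
<prompt>
  <persona>Prior-authorization specialist at a family medicine practice</persona>
  <objective>Draft a medical-necessity letter for the payer named in context</objective>
  <context>Payer policy bulletin, relevant chart excerpts (no PHI in v1)</context>
  <requirements>
    <rule>Anchor every diagnostic claim to an ICD-10 code in the source</rule>
    <rule>If evidence is insufficient, emit REFUSE_AND_ESCALATE</rule>
  </requirements>
  <examples><!-- few-shot examples inserted per specialty in Phase 3 --></examples>
  <output_format>Letter body, then a checklist of verified claims</output_format>
</prompt>
```

Phases 2-4 then audit, tool-tune, and iterate on this skeleton without changing its shape.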

Why this matters to a buyer: when a practitioner buys a VITALS-powered pack, they're not getting prompts that "worked once for someone." They're getting prompts that passed a verification gate, were red-teamed against medical edge cases, and have a clear escalation path when the AI doesn't know. That's the engineering difference no notebook PDF can match.

Try VITALS: the full framework deep-dive, the six dimensions, the four shipped templates, the security model.

Five Examples From the Vault

Five real prompts (simplified for public view), drawn from packs across specialties. Each shows the VITALS engineering: persona, requirements, verification, escalation path. The first three handle workflow pain (Prior Auth, Nursing Handoff, Front Desk). The last two are the prompts every clinician asks for, evidence synthesis and patient education translation.

What this means: the five examples below aren't sample text. They're production-grade prompts with safety gates baked in. Look at the words REFUSE_AND_ESCALATE, TRACE_PASS, NO PRIMARY SOURCE FOUND, RESTART. Those aren't decoration. They're the structured checkpoints that make the difference between an AI that helps and an AI that quietly produces something a clinician can't defend. A practitioner running these prompts gets either a verified output or an explicit refusal with the failing check named, never a confident hallucination.

Pro Pack · Prior Authorization Compliance Watchdog Pattern

Prior Auth Letter Forge

For the practitioner drafting a prior authorization letter for an insurer. The prompt enforces ICD-10 anchoring, CPT verification, no-PHI-paraphrase, and an explicit "escalate to clinician" fallback.

COMPLIANCE WATCHDOG: Prior Auth Specialist v1.0

Every response must verify:
1. No diagnostic claim is made without an ICD-10 anchor
   present in the source chart.
2. No CPT code is suggested without checking the payer's
   published policy bulletin.
3. No PHI is paraphrased into the response that was not
   in the source input.
4. If the chart lacks evidence to support medical necessity,
   the response is "INSUFFICIENT EVIDENCE, escalate to
   clinician with these specific gaps: <list>"
5. If you detect yourself drifting toward generic-letter
   language, restart from <persona>.

If any check fails: return REFUSE_AND_ESCALATE with the
failing check named.
Pro Pack · Nursing Handoff 4-Stage Verification Pipeline

SBAR Handoff Verifier

For the nurse generating a shift handoff using SBAR (Situation, Background, Assessment, Recommendation). The prompt runs four named gates before the handoff ships: source trace, medication cross-check, risk surface, length and voice.

HANDOFF VERIFICATION PIPELINE: SBAR Generator v1.0

Generate a handoff note using SBAR structure.
Before emitting, run:

Stage 1, Source Trace
  Every assertion in S/B/A/R must trace to a timestamped
  entry in the source notes.
  Emit: TRACE_PASS or TRACE_FAIL:<list>

Stage 2, Medication Cross-Check
  Every medication mentioned must include: name, dose,
  route, frequency, last admin.
  If any field is missing or "unknown", that medication
  is dropped with a flag.
  Emit: MED_PASS or MED_FAIL:<list>

Stage 3, Risk Surface
  Allergies, code status, isolation status, fall risk,
  recent change in level of care must each be present
  even if "none documented". Never omit; only mark absent.
  Emit: RISK_PASS or RISK_FAIL:<list>

Stage 4, Length & Voice
  ≤ 200 words. SBAR-section-headed. No hedging.
  Emit: SHIP or RESTART:<reason>

Only after SHIP do you output the handoff note.
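The four-gate pattern above generalizes: run named checks in order, stop at the first failure, ship only when every gate passes. A minimal application-side sketch of that pattern follows; the stage functions are hypothetical stand-ins, since in the shipped pack the gates run inside the prompt itself:

```python
# Staged-gate verification pipeline, sketched in plain Python.
from typing import Callable

Stage = Callable[[str], tuple[bool, str]]

def run_pipeline(note: str, stages: list[tuple[str, Stage]]) -> str:
    """Run named gates in order; stop and report the first failure."""
    for name, stage in stages:
        ok, detail = stage(note)
        if not ok:
            return f"{name}_FAIL:{detail}"
    return "SHIP"

# Hypothetical gates mirroring Stage 4 (length) and a trivial SBAR header check.
def length_gate(note: str) -> tuple[bool, str]:
    words = len(note.split())
    return (words <= 200, f"{words} words > 200")

def sbar_headers_gate(note: str) -> tuple[bool, str]:
    missing = [h for h in ("S:", "B:", "A:", "R:") if h not in note]
    return (not missing, f"missing sections {missing}")

stages = [("LENGTH", length_gate), ("SBAR", sbar_headers_gate)]
```

The design point is that every gate emits a named PASS/FAIL, so a failing handoff tells the nurse exactly which check stopped it.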
Starter Pack · Front Desk Voice Calibration Pattern

Insurance Verification Voice Script

For the front-desk team calling insurance to verify benefits before a patient visit. The prompt locks practice voice (tone, pace, language, personality), structures the call, and switches into sub-modes if the member is confused or upset.

FRONT-DESK VOICE: Insurance Verification v1.0

Voice settings (set per practice; default shown):
  Tone:        Approachable, lean 30% toward Authoritative
               for benefits discussions
  Pace:        Deliberate (callers are anxious; rushing = re-call)
  Language:    Simple, never say "deductible accumulator"
               or "OOP max"; say "how much you've already
               paid toward your yearly limit"
  Personality: Pragmatic with one moment of warmth in
               opening + closing

Script structure:
  1. Identify yourself + practice + reason for call
  2. Verify member ID and DOB (read back)
  3. Confirm: in-network status, copay, deductible-met,
     prior-auth requirements
  4. Read back what the patient owes today in plain language
  5. Confirm next-step appointment time + what to bring

If member says "I don't understand":
  switch to <plain_english_explainer> sub-prompt.
If member becomes upset:
  switch to <de_escalation> sub-prompt and offer warm transfer.
Pro Pack · Clinical Research Evidence Synthesizer Pattern

Clinical Literature Synthesizer

For any clinician needing a current-evidence summary on a clinical question. The prompt enforces named-source citation, GRADE evidence rating, explicit conflict-of-evidence surfacing, and a hard refusal to give individual patient advice or invent sources.

EVIDENCE SYNTHESIZER: Clinical Literature Brief v1.0

For a clinician needing a current-evidence summary on a clinical
question. Not a treatment recommendation. Not patient-specific advice.

Inputs:
  - Clinical question (must be answerable, not "what's good for X")
  - Patient population context (optional: age range, comorbidities)
  - Evidence horizon (default: last 5 years)

Hard rules:
  1. EVERY claim must cite a real source by name, year, journal,
     and PMID or DOI. No "studies show" without naming the study.
  2. Grade evidence quality with GRADE
     (HIGH / MODERATE / LOW / VERY LOW).
     Strength of recommendation separately.
  3. Surface conflicting evidence explicitly. If sources disagree,
     list both sides and note which is more recent / higher quality.
  4. If you cannot find a primary source for a specific claim,
     emit: NO PRIMARY SOURCE FOUND. Do NOT extrapolate.
  5. Include limitations: population, sample size, follow-up,
     conflict-of-interest if known.

Output format:
  - Bottom-line summary (2-3 sentences, plain language)
  - Key evidence (3-5 strongest studies, each GRADE-rated)
  - Conflicting evidence (if any)
  - What we don't know (the gaps)
  - Suggested next-step queries

Hard refusals:
  - Specific patient-care recommendation
       → REFUSE_AND_ESCALATE_TO_CLINICIAN
  - "Should I prescribe X?"
       → REFUSE: "I synthesize evidence; I don't recommend treatments.
         Apply with your clinical judgment and current guidelines."
  - "Diagnose this patient"
       → REFUSE_AND_ESCALATE_TO_CLINICIAN

Final gate: every cited source must include name + year + journal
+ identifier. If any citation is incomplete, return RESTART:<reason>.
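That final gate is mechanical enough to sketch in code. The field names and PASS/RESTART strings below are illustrative (the shipped pack runs the check in-model); the identifier pattern assumes PMIDs are all digits and DOIs begin with the "10." directory prefix:

```python
# Citation-completeness gate sketch for the Evidence Synthesizer.
import re

REQUIRED = ("name", "year", "journal", "identifier")
# Accept an all-digit PMID or a "10."-prefixed DOI.
ID_PATTERN = re.compile(r"^(?:\d{1,8}|10\.\d{4,9}/\S+)$")

def citation_gate(citations: list[dict]) -> str:
    """Return 'PASS' or 'RESTART:<reason>' naming the first incomplete citation."""
    for i, c in enumerate(citations, start=1):
        missing = [f for f in REQUIRED if not c.get(f)]
        if missing:
            return f"RESTART:citation {i} missing {missing}"
        if not ID_PATTERN.match(str(c["identifier"])):
            return f"RESTART:citation {i} identifier not a PMID or DOI"
    return "PASS"
```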
Pro Pack · Patient Communication Plain-Language Translator Pattern

Patient Education Briefer

For any clinician who needs to explain a diagnosis, procedure, or medication regimen to a patient in plain language without dumbing down the medicine. The prompt locks reading level, mandates red-flag triggers, and refuses individual diagnostic claims.

PATIENT EDUCATION BRIEFER v1.0

For any clinical concept that needs to be explained to a patient
in plain language, without losing the medicine.

Inputs:
  - The clinical concept (diagnosis, procedure, or medication regimen)
  - Audience: patient self / caregiver / parent of pediatric patient
  - Patient context (age, reading level if known, language preference)
  - Length target (default: 200-300 words)

Reading-level lock:
  - Default:      Flesch-Kincaid grade 6 (US national average)
  - Caregivers:   grade 8 acceptable
  - Specialty:    grade 8 acceptable
  - NEVER above grade 10. NEVER medical jargon without a
    plain-English gloss in the same sentence.

Required structure (in order):
  1. ONE-SENTENCE explanation in plain English, no jargon.
  2. Why this matters for the patient specifically.
  3. What they can do (concrete actions, numbered).
  4. RED FLAGS, what would mean "call us back today."
  5. When to follow up.

Hard refusals:
  - Specific dose recommendation
       → REFUSE: "Follow the dose your provider prescribed."
  - Patient-specific diagnostic claim ("you have X")
       → REFUSE: explain the general concept only; diagnosis is
         the clinician's role.
  - Anything that could be confused with emergency symptoms
       → RED FLAGS section becomes mandatory; no exceptions.

If audience is pediatric caregiver: add a "talking to your child"
section with ONE age-appropriate analogy.

If concept involves medication: include "as your provider
prescribed" language; never give independent dosage guidance.

Final gate: re-read the output. Could a patient at the target
grade level understand every sentence?
If not, restart from <persona>.
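The reading-level lock is checkable in code. Below is a sketch using the published Flesch-Kincaid grade formula (0.39 x words-per-sentence + 11.8 x syllables-per-word - 15.59); the vowel-group syllable counter is a rough heuristic, and a production gate would use a tested readability library instead:

```python
# Reading-level gate sketch for the Patient Education Briefer.
import re

def count_syllables(word: str) -> int:
    """Approximate syllables as vowel groups; drop one for a trailing silent 'e'."""
    word = word.lower()
    groups = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and groups > 1:
        groups -= 1
    return max(groups, 1)

def fk_grade(text: str) -> float:
    """Flesch-Kincaid grade level of a passage."""
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

def reading_level_gate(text: str, max_grade: float = 6.0) -> str:
    grade = fk_grade(text)
    return "PASS" if grade <= max_grade else f"RESTART:grade {grade:.1f} > {max_grade}"
```

A draft that fails comes back as RESTART with the computed grade, matching the prompt's final-gate behavior.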
What This Means For You

We didn't start from zero. VITALS is built on top of an existing prompt engineering substrate, the same RCA architecture and PROMETHEUS framework SynthesisArc already uses for every Tier-2 artifact internally. Adapting it for medical means re-skinning the personas, adding clinical safety boundaries, and writing the medical few-shot examples. The hard engineering work is done. The medical tuning is content work, and Leona and Brian are the perfect people to validate it.

Module 07 · The Path

First This, Then This, Then This

Plain language. No calendar. We move faster than industry norms anyway. The point is the sequence: which thing earns the right to ship next.

Editorial stepping stones illustrating phased path
First
Start here

Ship the Prompt Packs

We start by putting real, working prompts into the hands of front-desk staff, nurses, and physicians. Each pack solves one specific problem, like the 13 hours a week they lose to insurance prior authorization.

  • Prior Authorization Pack (the biggest pain we attack first)
  • Family Medicine Pack
  • Front Desk Operations Pack
  • Nursing Handoff Pack
Then
Right after

Open the AI Residency, then the Fellowship

Once practitioners are using our packs and seeing time saved, we open the training they're asking for. Two levels: AI Residency ($497 self-paced) gives every clinical role the foundation. AI Fellowship ($997 cohort + $1,497 specialty tracks) is the advanced, specialty-deep tier for physicians ready to lead AI adoption in their field. Same naming logic as residency-then-fellowship in clinical training: accessible foundation first, advanced specialty work second.

  • AI Residency Foundations ($497 self-paced), broad, every clinical role
  • AI Fellowship cohort ($997), live, peer-reviewed, premium
  • AI Fellowship specialty tracks ($1,497), Cardiology, FM, EM, Peds, Psych
  • Pursue CME accreditation on the Fellowship (makes it reimbursable)
Then
Once it earns its place

Build the Personal Assistant Carefully

Once we know which workflows people use weekly, we build the always-on assistant Rick has been dreaming about. We don't rush this one: the FDA cares deeply about anything that sounds like clinical judgment.

  • Waitlist landing page goes live alongside the Residency
  • Real beta only after legal counsel signs off
  • Built around the workflows we KNOW people repeat, not guesses
  • Free tier with NPI verification (like OpenEvidence)

Each step proves the next. Packs prove practitioner pain. The Residency proves they'll learn. The Assistant proves they'll stay. We don't skip ahead.

What this means: each step in the path proves the right to take the next one. The Prompt Packs prove practitioners actually have the pain we said they have, and that they'll pay to fix it. The Residency and Fellowship prove they'll come back for the deeper training. The Personal Assistant only ships once we can name the workflows people repeat every week, not guess at them. This is how we avoid Olive AI's failure mode. We don't sell "transformation"; we sell one fixed problem at a time, and each fix earns the budget for the next.

Module 08 · The Load-Bearing Tension

Where the AIs Disagreed

Should we ship the Personal AI Assistant early as a private beta, or hold it as a waitlist while we prove Tier 1 and Tier 2 retention?

What this means: not every recommendation has unanimous agreement. When the AI sources split, that's actually valuable: it surfaces the strategic decisions only a human can make. Below are the two positions, with the reasoning behind each. NIGEL's recommendation is at the bottom.

Position A 2 sources

SHIP IN 90 DAYS

Grok 4 · GPT-5
  • Market window, first-mover advantage in voice-first medical AI
  • Subscription revenue starts earlier in the funnel
  • Forces development discipline and product-market fit testing
  • Rick is fixated on the watch-app paradigm; momentum matters
Position B 4 sources

DEFER TO Q4 2026

Gemini 2.5 Pro · Aasia CE (CNO) · Vic CE (CSO) · Lane 3 Regulatory Analysis
  • No-PHI beta still creates FDA SaMD adjacency the moment voice reads like clinical judgment
  • Expectation debt before Tier 1 + Tier 2 prove retention
  • BAA stack + PHI redaction + counsel sign-off take real time; rushing them creates regulatory exposure
  • Aasia: "Earn the subscription through repeated workflow proof"
NIGEL Synthesizer Recommendation

Position B: hold and prove. The waitlist landing page goes up alongside the Academy so we capture interest immediately. The real beta opens once Tier 1 + Tier 2 prove what people actually repeat weekly, and only after legal counsel signs off.

What this means: when the AIs split, that's the moment a human has to choose. Two strong models (Grok 4, GPT-5) said ship the Assistant in 90 days for first-mover advantage. Four sources, including our own clinical and security CEs and the regulatory deep-dive, said wait. The wait-and-prove side wins because the cost of shipping early and getting a warning letter is catastrophic, while the cost of waiting and putting up a waitlist page is near-zero. We capture interest now, ship when the workflows are proven, and never let a "move fast" instinct cost us the brand.

Module 09 · The Decisions

Seven Operator-Only Calls

There are some decisions no amount of research can make for us. These seven require human judgment about partnerships, equity, brand, and direction. They're listed in priority order.

How to read this: the red badges mark decisions that need to happen first: domain, entity structure, contractual roles for Leona and Brian, and the first four specialties to build packs for. The yellow and green ones are slightly less urgent but still meaningful.

# Decision Detail Urgency
01 Domain decision medicalaipilot.synthesisarc.com vs. separate brand (e.g., aipilot.health) Highest
02 Entity structure Separate LLC/DBA under SA umbrella (Daniel's proposal) vs. division-of-SA Highest
03 Leona contractual role Advisor, partner, equity, founding clinician? Highest
04 Brian contractual role Advisor, partner, equity, founding physician? Highest
05 First pack author Leona writes? NIGEL drafts + Leona reviews? AI-drafted + practitioner-validated? High
06 Academy price anchor $497 self-paced + $997 cohort confirmed, or A/B test? Medium
07 First 4 specialties Prior Auth + Family Med + Front Desk + Nursing Handoff confirmed, or adjust? Highest
Module 10 · CHORUS Consensus

The Eight High-Confidence Findings

When fourteen independent AI sources, each trained on different data with different biases and reasoning patterns, all agree on the same finding, the signal is strong. Below are the eight findings that earned that kind of convergence.

How to read this: each circle shows how many of the 14 sources supported that finding. Green circles mean unanimous or near-unanimous agreement. Cyan and yellow mean a strong majority. These are the findings we treat as load-bearing: we build the rest of the strategy on top of them.

14/14
100%
CONSENSUS

"You drive. AI is the GPS." is the correct master frame

Empirically supported. Replacement fear intensifies WITH AI knowledge (AMA 2026, Wolters Kluwer 2025). The GPS framing inverts the fear into permission to adopt.

All 14 sources
14/14
100%
CONSENSUS

Individual practitioners FIRST, companies later

Practitioner-first PLG took Heidi to 2M weekly clinicians on $96M and Freed to 26,000 paying clinicians with near-zero CAC.

All 14 sources
9/14
64%
EMERGING

Documentation burden + prior auth = most monetizable pain

87% of physicians chart at home; 89% say PA increases burnout; 13 hrs/wk PA burden; $35B annual PA admin spend.

Lanes 1, 2, 6, 8, 9, 11, 12, 13, 14
6/14
43%
SINGLE-SOURCE

NPI/license verification at signup = trust moat

OpenEvidence: 760K registered US physicians in <3 years at $12B valuation, 100% NPI-gated. Verification is conversion, not friction.

Lanes 2, 6, 12, 13, 14, plus Aasia CE
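The NPI format itself gives the signup flow a cheap first gate: per the CMS standard, every 10-digit NPI carries a Luhn check digit computed over the 9-digit base prefixed with 80840, so a form can reject mistyped numbers instantly before making the real verification call to the NPPES registry. A minimal sketch (the function name is ours, not a library API):

```python
def npi_checksum_ok(npi: str) -> bool:
    """Validate an NPI's Luhn check digit (CMS prefix 80840).

    This only catches typos; real verification still requires an
    NPPES registry lookup against the clinician's name.
    """
    if len(npi) != 10 or not npi.isdigit():
        return False
    payload = "80840" + npi[:9]  # card-issuer prefix + 9-digit base
    total = 0
    for i, ch in enumerate(reversed(payload)):
        d = int(ch)
        if i % 2 == 0:  # double alternate digits, rightmost first
            d *= 2
            if d > 9:
                d -= 9  # same as summing the two digits
        total += d
    check = (10 - total % 10) % 10
    return check == int(npi[9])
```

A check like this keeps the verification step feeling instant (the OpenEvidence lesson: verification as conversion, not friction) while the slower registry lookup runs behind it.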
5/14
36%
SINGLE-SOURCE

Freemium + verified-clinician free tier non-negotiable for Tier 3

With Doximity Scribe and OpenEvidence permanently free, a paid pure-scribe product is structurally dead on arrival.

Lanes 2, 6, 11, 12, 13
7/14
50%
EMERGING

Avoid FDA SaMD + HIPAA BAA exposure in v1

One feature that crosses into "differential diagnosis" triggers FDA Class II/III device pathway (Exer Labs precedent Feb 2025).

Lane 3 + 6 LLM sources
3/14
21%
SINGLE-SOURCE

Existing SA infrastructure can ship v1 in 4-6 weeks

lib-prompts-rca (Breyon's own Prometheus predecessor) + Sales Academy template + INSIGHTS engine + 4 direct-apply CEs.

Lanes 4, 5, 10 (subagents)
5/14
36%
SINGLE-SOURCE

DataAnnotation $50-60/hr AI Trainer pool = qualified lead funnel

100K+ contractors globally already self-identified as AI-curious. Warmest possible top-of-funnel for the AI Residency.

Lanes 2, 8, 9, 11, 12, 13
About · CHORUS v1.0

How This Brief Was Built

CHORUS: multi-AI deep research and strategic triangulation. The same research brief was dispatched to 14 independent AI systems, then synthesized into a single recommendation with consensus/divergence mapping.

🧠
3 Frontier LLMs
Grok 4 · Gemini 2.5 Pro · GPT-5
📚
7 Claude Subagents
Pain · Comp · Reg · Stack · CE · Success · GitHub
3 Deep Research Passes
Grok · Gemini · GPT with web grounding
🎯
2 Manus CEs
Aasia (CNO) · Vic (CSO) under UCS
Total Research
54,254
words across 14 sources
Citations
707
primary-publisher URLs (Rule 47)
Wall Time
~90min
all 14 dispatches in parallel

What this means: a single AI is one opinion. Fourteen AIs, each trained on different data and reasoning patterns, are a panel, and when they converge, the signal is strong. CHORUS asked the same strategic questions to three frontier LLMs (Grok, Gemini, GPT-5), seven Claude subagents, three deep-research passes, and two of our cognitive entities running under SynthesisArc's Unified Cognitive Substrate. The convergences became our load-bearing findings. The disagreements became the tension points only a human can resolve. This brief isn't NIGEL's opinion. It's the synthesis of fourteen.

The Name · Praxis

Why "Praxis"?

Every product line earns its name. This one earns it twice: once on the etymology, once on the clinical vocabulary. Below is why the word lands, what it signals about the product, and the risks worth knowing before we commit.

The word itself

Praxis is Greek (πρᾶξις), meaning "action" or "doing," but specifically informed action. Aristotle separated it from theoria (pure knowing) and poiesis (making things). Praxis was the third thing: knowledge made real through practice. The integration of understanding and doing.

Why it lands hard in medicine

Doctors don't do medicine; they practice it. A physician's "practice" is literally their praxis. In neurology, apraxia is the clinical term for losing the ability to perform skilled movements despite intact understanding. So praxis is already a medical word, naming the exact thing the AI supports: the bridge between what you know and what you do at the bedside.

What it signals about the product

This is the killer move. The name doesn't position the AI as a tool for the practitioner; it positions the AI as a participant in the practitioner's praxis. Shanel's practitioner-first framing doesn't just survive here; it gets stronger. Doctors don't suddenly become "AI users." They keep practicing. The AI is woven into the practice itself.

Premium without being precious

Two syllables, hard consonants, clean wordmark. Sits naturally next to brands like Notion, Stripe, Linear, Arc. Short, owned, ungeneric. No "AI" suffix needed; the brand is the word.

The Risks Worth Knowing
  • Praxis is in use elsewhere.

    The Praxis teacher certification test is the biggest namesake collision. Worth a trademark search in medical and tech classes before we commit.

  • Slightly opaque on the tin.

    "Wayfinder" tells you what it does in one syllable; Praxis needs a tagline. That's a feature in branding (more memorable, more headspace) and a bug in cold marketing (more explanation up front).

  • Reads "smart".

    For some clinicians that's premium and respected; for others it's egghead. Depends entirely on target audience. Skews academic-medicine, less DTC.

How It Extends
  • Tagline candidates
    · Clinical reasoning, applied.
    · The practice, elevated.
    · Where knowing meets doing.
  • Family architecture

    Praxis Consult, Praxis Rounds, Praxis Notes. The name can host an ecosystem if the product scales.

  • Visual lane

    Serif wordmark reads clinical authority; sans reads modern tool. Probably sans with a single distinguishing letterform. The x is the visual handle.

The bet

We aren't naming a product. We're naming a category of behavior the practitioner already owns. Praxis is what doctors and nurses already do. The AI's job is to make their praxis sharper, faster, and safer. The name and the product say the same thing.