Cerebro Clinical-Grade AI Deployment Playbook
From IRB → FDA → Revenue (Without Burning Time or Trust)
A field manual for builders operating at the boundary between innovation and responsibility in healthcare AI.
Preface
This playbook is written for builders operating at that boundary.
It assumes technical competence and clinical awareness. It does not teach machine learning. It does not attempt to replace regulatory counsel. It does not attempt to persuade anyone that AI is inevitable.
Its purpose is narrower and more useful: to transfer the operational judgment required to deploy AI into clinical reality—organizational, regulatory, and economic—without collapsing under ambition.
Healthcare does not reject innovation. It rejects ambiguity, unmanaged risk, and systems that ask institutions to absorb uncertainty without consent.
Author's Introduction
Lessons Learned, Not Dogma
If you are building clinical AI and you believe the model is the hard part, you are still at the surface.
The model is rarely what kills a healthcare AI program. What kills it is everything around the model: ownership that no one will claim, language that drifts, change that isn't controlled, and workflow friction that compounds until adoption quietly dies.
Underneath it all is a simple institutional truth: no one wants to carry risk they did not consent to carry. Most teams don't fail loudly. They stall—after the pilot, before permanence—while everyone stays polite.
I wrote this playbook as a working field manual from the front lines.
It's a synthesis of what we've learned building and deploying clinical-grade AI at Cerebro NeuroTech—across research, product development, and real-world application environments where patients, clinicians, and institutions have no patience for ambiguity.
It is not dogma. It's a set of inferences, hypotheses, and operational patterns that have held up under pressure, and it will evolve as the work evolves.
If you are a founder, engineer, clinician, compliance leader, or investor trying to move from "promising demo" to "trusted clinical workflow," you should read this—at minimum to avoid expensive, predictable mistakes.
Use it like a map of hazards, not a treasure map. The goal isn't the city of gold. The goal is permanence.
— Paolo Alejandro Catilo, CEO & Chief Engineer, Cerebro NeuroTech, Inc.
How to Use This Playbook
Use this document as an operator's manual.
It is designed to be read in sequence once, then used as a reference during execution.
01
Recommended First Read (10–15 minutes)
  • Read the Claims Boundary Box, Evidence Ladder, and SaMD Labeling Posture.
  • Read Sections 1–2 to lock posture and language.
  • Read Sections 3–6 to stabilize evidence strategy, data governance, architecture discipline, and FDA posture.
  • Read Sections 7–9 to operationalize deployment, economics, governance, and risk.
  • Read Sections 10–11 to prevent drift and enforce minimalist execution.
02
How to Use During Execution
  • If momentum stalls after a pilot: go to Section 1 (ownership + conversion gates).
  • If documents conflict (IRB, decks, contracts): go to Section 2 (posture lock).
  • If IRB review escalates: go to Section 3 (language + sequencing + version lock).
  • If partners question data integrity: go to Section 4 (separation + lineage + auditability).
03
How to Keep This Playbook Stable
Do not edit language casually. If you must change claims, update the Claims Boundary Box first. Treat changes as releases: version, document, and communicate.
Language Discipline Panel (Use Everywhere)
Avoid trigger phrases unless formally cleared:
  • "Diagnoses" / "Predicts disease"
  • "Autonomous" / "Replaces clinician judgment"
  • "Learns continuously in real time"
  • "Improves accuracy over time" (during a formal study)
Prefer disciplined alternatives:
  • "Decision support" / "Informational output"
  • "Clinician retains full authority"
  • "Version-locked model during evaluation"
  • "Observed for feasibility and concordance"

Doctrine: If your system requires defensive explanation, the design or the language is wrong.
Quick Reference
12 Core Principles
Freeze posture (research vs clinical support vs commercial) for 6–12 months; speak one language.
Define ownership early: name a single institutional owner and a single workflow insertion point.
Lock the model during evaluation: no silent updates; versioned releases only.
Separate training from evaluation: maintain clean datasets, lineage, and consent scope.
Design outputs for clinical rhythm: fast, readable, consistent, and low-burden.
Keep claims narrow: FDA evaluates claims; every promise becomes a validation obligation.
Use IRB to learn, not to prove: observational, minimal-risk studies preserve speed and optionality.
Make adoption a workflow property: remove logins, clicks, and responsibility ambiguity.
Translate value into economics: prove one lever (throughput, cost avoidance, reimbursement enablement).
Governance should be boring: assign accountability, define escalation, reduce surprise.
Stop drift early: scope freeze, conversion gates, and founder-exit criteria.
Permanence beats velocity: build a stable interface between clinical reality and controlled inference.
Read Before Anything Else
Claims Boundary Box
This playbook assumes a clinical decision support posture unless explicitly stated otherwise.
This software class is intended to:
  • Provide informational outputs to assist qualified clinicians in reviewing clinical data within established workflows.
  • Support triage and review without changing standard of care.
This software class is not intended to:
  • Diagnose disease or determine treatment.
  • Provide autonomous decisions.
  • Replace or override clinician judgment.

Non-negotiable boundary: Clinicians retain full authority and responsibility for interpretation and action.
Evidence Ladder (How Claims Expand Obligations)
Use this to keep language aligned with evidence maturity.
Post-Market Monitoring Posture (Minimal, Defensible)
Even early deployments benefit from a simple monitoring posture that signals maturity without over-committing.
  • Monitor usage, adoption, and incident signals.
  • Detect drift through periodic review of input quality and output distributions, within approved governance (a minimal sketch follows this panel).
  • Respond through controlled change: triage issues, document mitigations, and ship versioned updates.
  • Communicate changes appropriately to sites based on impact.

Rule: Monitoring is not "continuous learning." Monitoring is controlled observation with controlled change.
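To make "controlled observation" concrete, below is a minimal sketch of a periodic drift check over model output scores. It is illustrative rather than doctrinal: the 10-bin histogram, the 0.2 PSI threshold, and the baseline/current score inputs are assumptions that a real program would replace with governed, documented choices.

```python
# Minimal periodic drift check over model output scores (illustrative sketch).
# Assumes scores in a fixed range; the bin count and PSI threshold are
# conventional placeholders, not validated policy.
import numpy as np

def population_stability_index(baseline: np.ndarray, current: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a locked baseline window and the current review window."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_frac = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_frac = np.histogram(current, bins=edges)[0] / len(current)
    # Clip empty bins to avoid log(0).
    base_frac = np.clip(base_frac, 1e-6, None)
    curr_frac = np.clip(curr_frac, 1e-6, None)
    return float(np.sum((curr_frac - base_frac) * np.log(curr_frac / base_frac)))

def review_window(baseline_scores, current_scores, threshold: float = 0.2) -> dict:
    """Flag drift for human triage; never auto-update the model."""
    psi = population_stability_index(np.asarray(baseline_scores, dtype=float),
                                     np.asarray(current_scores, dtype=float))
    action = "open_change_review" if psi > threshold else "log_and_continue"
    return {"psi": round(psi, 4), "action": action}
```

The design point is the return value: a drift signal opens a documented change review. It never retrains or swaps the model, which would break the version lock.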
Section 1
The Reality Check
Healthcare AI most often fails between pilot and permanence. Teams demonstrate feasibility and early signal, then stall when asked to operate inside real institutions.
The gap exists because healthcare optimizes for harm minimization, accountability, and continuity—not novelty.
Your primary constraint is not the model. Your primary constraint is trust formation across stakeholders who carry risk.
Operational Reality: Pilots rarely end with a formal "no."
They end with silence, delay, and diffuse ownership.
Common stall points look like this:
Legal Limbo
Legal requests clarifications you never wrote (data rights, indemnity, incident response).
Compliance Questions
Compliance asks whether the model changed mid-study (you cannot answer cleanly).
Operations Burden
Operations cannot justify additional steps, clicks, or minutes per patient.
Leadership Support Without Action
Clinical leadership is supportive but cannot compel daily adoption.

The pilot-to-permanence gap is not technical. It is institutional: who owns the tool, who bears risk, and who benefits enough to pay.
Failure Snapshots (What This Looks Like in the Wild)
Snapshot A — The "Forever Pilot." You complete a 90-day pilot. The site asks for "one more month" to gather more data, then repeats the request. No one can name the conversion criterion.
Snapshot B — The "Compliance Freeze." Your outputs look strong. A compliance leader asks, "Did the model change during the study?" The answer is complicated. The project pauses indefinitely.
Decision Boundaries
Proceed when:
  • A single institutional owner is named (not a committee).
  • Workflow insertion is defined and stable.
  • Model and claims are version-locked for the evaluation window.
Pause and reframe when:
  • Requests multiply without a clear decision-maker.
  • The "pilot" repeats with no conversion plan.
  • You are asked for assurances that require claims you have not validated.
Stop when:
  • There is no path to ownership or payment.
  • The site uses you as a perpetual experiment without commitment.

Micro-Templates (Copy/Paste)
Pilot conversion criterion (internal):
"This pilot converts to production when (a) an operational owner is assigned, (b) the insertion point is frozen, (c) the model version is locked for the evaluation window, and (d) procurement terms are initiated on a defined timeline."
One-sentence reality check (to leadership):
"We are not blocked by model performance; we are blocked by ownership, workflow burden, and risk assignment."
Section 2
Framing the Problem Correctly
Research, clinical support, and commercial postures are not points on a spectrum. They are switches. Each mode tolerates different uncertainty and requires different language.
A team that mixes modes will produce contradictory documents—and institutions will protect themselves.
Teams often promise learning to researchers, reliability to clinicians, and scale to investors—simultaneously.
This shows up as conflicting statements across:
1. IRB protocol ("observational feasibility") vs. pitch deck ("diagnostic-grade").
2. Contracts ("site owns data") vs. product plans ("we retrain on everything").
3. Clinical messaging ("advisory only") vs. roadmap ("automation next quarter").
Institutions respond to contradictions by slowing down, expanding review, or insisting on conservative restrictions.
Lock posture for a defined window (typically 6–12 months):
  • Research posture when you must learn and can tolerate uncertainty.
  • Clinical support posture when outputs are stable and workflow insertion is defined.
  • Commercial posture when scale and repeatability matter more than exploration.

Posture statement (use everywhere for 6–12 months): "This effort is an observational evaluation of decision-support outputs within routine care. It does not change standard of care, does not provide diagnoses, and maintains clinician authority."
Section 3
IRB as a Strategic Lever (Not a Bottleneck)
The IRB is not an obstacle. It is a validator of ethical clarity under uncertainty. Used well, it legitimizes data collection, disciplines scope, and preserves regulatory optionality.
IRB work should help you observe reality—not prove product performance.
IRB reviewers rarely object to AI itself. They object to language.
Language that triggers institutional fear:
  • "The AI will identify disease."
  • "Clinicians will follow AI recommendations."
  • "The system learns and improves during the study."
Language that reduces fear:
  • "Informational outputs."
  • "Displayed after clinician assessment."
  • "Model version locked during the evaluation window."
  • "Clinician retains full authority."
Micro-Templates (Copy/Paste)
Minimal-risk positioning (protocol language):
"This is a prospective observational study. The software provides informational outputs only and does not alter standard of care. Clinicians retain full authority for all decisions."
Sequencing that reduces undue influence:
"AI outputs are displayed after the clinician's routine assessment is completed, to support review without directing care."
Data change control statement:
"The model version will remain locked during the study window. Any model updates will occur only after study completion and documented change review."
Section 4
Data Strategy That Survives Scrutiny
In healthcare, data is not the asset. Governance of data is the asset.
Institutions and reviewers rarely ask for more data first. They ask whether you can control what you collect, how you use it, and what changes over time.
What partners and reviewers care about is boring and absolute:
Provenance
Where did this data come from, and under what permissions?
Separation
Is training data distinct from inference/evaluation data?
Change Control
Did the model change during formal evaluation?
Auditability
Can you reconstruct exactly what happened for a given output?
Where teams get into trouble is almost always process, not intent:
  • "Pilot data" gets reused without clearly defined secondary-use scope.
  • Training and evaluation streams drift into each other through convenience.
  • A model gets quietly updated to "fix a bug," and the study becomes uninterpretable.
  • Logging is either too thin (no audit trail) or too heavy (privacy anxiety + operational burden).

Training vs inference separation (documentation language): "Training datasets are maintained separately from inference and evaluation datasets. Evaluation outputs are generated only from versioned releases and are not used for training during the active evaluation window."
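In practice, auditability means every output can be traced back to an input reference, a locked model version, and a dataset partition. A minimal sketch of an append-only inference log follows; the field names are illustrative, not a schema recommendation.

```python
# Append-only inference log sketch (field names are illustrative).
# Goal: reconstruct exactly what happened for a given output, and show
# that evaluation data never fed training during the active window.
import json
import uuid
from dataclasses import asdict, dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class InferenceRecord:
    record_id: str       # unique and immutable
    input_ref: str       # pointer to the source data, not the data itself
    model_version: str   # must match the locked release manifest
    partition: str       # "evaluation" or "training", never both
    output_summary: str  # what the clinician actually saw
    created_at: str      # UTC timestamp

def log_inference(log_path: str, input_ref: str, model_version: str,
                  output_summary: str, partition: str = "evaluation") -> str:
    record = InferenceRecord(
        record_id=str(uuid.uuid4()),
        input_ref=input_ref,
        model_version=model_version,
        partition=partition,
        output_summary=output_summary,
        created_at=datetime.now(timezone.utc).isoformat(),
    )
    with open(log_path, "a") as f:  # append-only by convention
        f.write(json.dumps(asdict(record)) + "\n")
    return record.record_id
```

Logging a pointer to the source data rather than the data itself keeps the trail reconstructable without turning the log into the "too heavy" privacy liability described above.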
Sections 5–11
The Remaining Core Sections
The playbook continues with detailed operational guidance across:
01
AI Architecture Without Regulatory Self-Sabotage
Predictable behavior, stable output contracts (sketched after this list), low cognitive burden, and change control that clinicians and regulators can understand.
02
FDA Pathways Without Fear
Narrow claims, stabilize intended use language, and align evidence to what is actually claimed.
03
Clinical Deployment in the Real World
Insertion points, role ownership, friction audits, and production constraints.
04
From "Study" to "Paid"
Translate measured clinical workflow value into a payment story.
05
Governance, Risk, and Quiet Competence
Accountability, escalation, and documentation tone that signals maturity.
06
The Silent Killers
Drift patterns that quietly kill healthcare AI programs—scope creep, pilot loops, and founder-as-glue dynamics.
07
The Minimalist Execution Model
Freeze claims early, ship versioned releases, enforce stable output contracts, and design deployments that do not depend on heroics.
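The "stable output contracts" named in items 01 and 07 can be made concrete as a frozen, versioned schema that changes only through a documented release. A minimal sketch; the field names and the deliberately coarse confidence vocabulary are assumptions, not recommendations.

```python
# Stable output contract sketch: the fields clinicians see never change
# shape without a versioned release. Field names are illustrative.
from dataclasses import dataclass

SCHEMA_VERSION = "1.0"  # bumped only through documented change review

@dataclass(frozen=True)
class DecisionSupportOutput:
    schema_version: str   # lets downstream consumers detect contract changes
    model_version: str    # ties the display back to a locked release
    finding: str          # short, readable informational statement
    confidence_band: str  # coarse on purpose: "low", "moderate", or "high"
    disclaimer: str       # constant: clinician retains full authority

def make_output(model_version: str, finding: str,
                confidence_band: str) -> DecisionSupportOutput:
    if confidence_band not in {"low", "moderate", "high"}:
        raise ValueError("confidence_band is outside the contracted vocabulary")
    return DecisionSupportOutput(
        schema_version=SCHEMA_VERSION,
        model_version=model_version,
        finding=finding,
        confidence_band=confidence_band,
        disclaimer="Informational output. Clinician retains full authority.",
    )
```

Downstream consumers key off schema_version, so a contract change becomes a visible, versioned event rather than a silent surprise inside a clinician's workflow.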
Closing Doctrine
Healthcare AI succeeds when it respects institutions, reduces burden, communicates honestly, and moves deliberately.
Resistance usually indicates that trust has not yet formed.
Slow down strategically. Move forward permanently.

Cerebro NeuroTech, Inc. 2025
This document is licensed for internal strategic planning, educational use, and advisory reference only. This material is provided for informational purposes only and does not constitute legal, medical, or regulatory advice.