# Quick Reference: Clinical AI Checklists
This appendix collects the key checklists from throughout the book for easy reference. Print this section or keep it accessible when evaluating, deploying, or using clinical AI systems.
## The Clinician’s Checklist

*From Chapter 18: Interpretability & Uncertainty*

Before trusting an AI tool’s output, ask yourself:
## Pre-Deployment Checklist

*From Chapter 16: AI Ops & Deployment*

Before any clinical AI system goes live, verify:

- Performance Validation
- Infrastructure
- Monitoring
- Governance
- Compliance
- Go/No-Go Decision (one way to automate this gate is sketched below)
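Where the go/no-go decision rests on pre-agreed performance thresholds, it can be encoded as a simple gate script. The sketch below is a minimal illustration, assuming hypothetical metric names, threshold values, and a `go_no_go` helper; none of these come from the book, so substitute the criteria your own governance committee signed off on.

```python
# Minimal go/no-go gate: compare validation metrics against pre-agreed
# thresholds. Every name and number here is a hypothetical placeholder.

# Assumed sign-off thresholds (illustrative values, not recommendations).
THRESHOLDS = {
    "auroc": 0.85,               # discrimination on the local validation set
    "sensitivity": 0.90,         # minimum acceptable recall for positives
    "calibration_error": 0.05,   # maximum tolerated calibration error
}

# Metrics where lower values are better (error-style metrics).
LOWER_IS_BETTER = {"calibration_error"}

def go_no_go(metrics: dict) -> bool:
    """Return True only if every required metric clears its threshold."""
    failures = []
    for name, threshold in THRESHOLDS.items():
        value = metrics.get(name)
        if value is None:
            failures.append(f"{name}: missing from validation report")
        elif name in LOWER_IS_BETTER and value > threshold:
            failures.append(f"{name}: {value:.3f} exceeds {threshold}")
        elif name not in LOWER_IS_BETTER and value < threshold:
            failures.append(f"{name}: {value:.3f} below {threshold}")
    for failure in failures:
        print("NO-GO:", failure)
    return not failures

# Example with made-up validation results:
print(go_no_go({"auroc": 0.88, "sensitivity": 0.92, "calibration_error": 0.03}))
```

The value of writing the gate down before validation results arrive is that the pass/fail criteria cannot quietly drift to fit the numbers.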
## Field Guide Checklist

*From Chapter 22: Writing the Field Guide*

Every clinical AI field guide should include:
## Model Evaluation Checklist

*From Chapters 6 and 8*

When evaluating a clinical AI model:

- Data Quality
- Performance Metrics (see the evaluation sketch after this list)
- Generalization
- Clinical Validity
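On the performance-metrics point, a single point estimate can hide how small the validation set is; a confidence interval makes that visible. The following sketch uses scikit-learn’s `roc_auc_score` with a percentile bootstrap; the synthetic data and the `bootstrap_auroc` helper are illustrative assumptions, not the book’s evaluation protocol.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for a held-out test set: binary labels and
# predicted probabilities loosely correlated with the labels.
y_true = rng.integers(0, 2, size=500)
y_prob = np.clip(0.3 * y_true + 0.7 * rng.random(500), 0.0, 1.0)

def bootstrap_auroc(y, p, n_boot=1000, alpha=0.05):
    """AUROC with a percentile-bootstrap confidence interval."""
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), size=len(y))   # resample cases
        if len(np.unique(y[idx])) < 2:               # need both classes
            continue
        stats.append(roc_auc_score(y[idx], p[idx]))
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return roc_auc_score(y, p), (lo, hi)

auc, (lo, hi) = bootstrap_auroc(y_true, y_prob)
print(f"AUROC {auc:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```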
## Bias Audit Checklist

*From Chapter 20: Fairness, Bias & Health Equity*

When auditing a clinical AI system for bias:

- Data Representation
- Performance Parity (see the subgroup audit sketch after this list)
- Mitigation
- Deployment Safeguards
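Checking performance parity takes only a few lines once predictions and subgroup labels are available. The sketch below computes sensitivity per subgroup and flags gaps beyond a tolerance; the subgroup labels, synthetic data, and the 0.05 tolerance are illustrative assumptions rather than recommended thresholds.

```python
import numpy as np
from sklearn.metrics import recall_score

rng = np.random.default_rng(1)

# Synthetic stand-ins: subgroup labels, ground truth, and model predictions.
groups = np.array(["A", "B"] * 250)
y_true = rng.integers(0, 2, size=500)
y_pred = rng.integers(0, 2, size=500)

MAX_GAP = 0.05  # assumed tolerance for between-group sensitivity differences

sensitivity = {}
for g in np.unique(groups):
    mask = groups == g
    sensitivity[g] = recall_score(y_true[mask], y_pred[mask])
    print(f"group {g}: sensitivity {sensitivity[g]:.3f} (n={mask.sum()})")

gap = max(sensitivity.values()) - min(sensitivity.values())
print(f"parity gap: {gap:.3f}", "FLAG for review" if gap > MAX_GAP else "within tolerance")
```

Sensitivity is one axis among several; the same loop applies unchanged to specificity, positive predictive value, or calibration within each subgroup.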
## Regulatory Pathway Checklist

*From Chapter 21: Regulation & Ethics*

When preparing a clinical AI system for regulatory submission:

- Classification (US FDA)
- Classification (EU MDR)
- Documentation
- Post-Market
## SaMD Classification Quick Reference

*From Chapter 21*

Is it SaMD? Ask: Does the software provide information used for clinical decisions?
| System Type | SaMD? | Typical Class (US FDA) |
|---|---|---|
| Administrative scheduling | No | N/A |
| EHR data storage | No | N/A |
| Diagnosis support (human in loop) | Yes | Class II |
| Diagnosis autonomous | Yes | Class III |
| Treatment recommendation | Yes | Class II/III |
| Triage/prioritization | Yes | Class II |
| Risk prediction | Yes | Class II |
Key question: What is the intended use? The same algorithm with different intended uses may have different classifications.
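To make that concrete, the table above can be read as a lookup keyed on intended use rather than on the algorithm. The sketch below simply encodes the table as data; it is an illustration of the principle, not a regulatory determination, and the `classify` helper is hypothetical.

```python
# The quick-reference table encoded as data: classification keys on the
# intended use, never on the algorithm itself. Illustrative only.
SAMD_TABLE = {
    "administrative scheduling":         (False, None),
    "ehr data storage":                  (False, None),
    "diagnosis support (human in loop)": (True, "Class II"),
    "diagnosis autonomous":              (True, "Class III"),
    "treatment recommendation":          (True, "Class II/III"),
    "triage/prioritization":             (True, "Class II"),
    "risk prediction":                   (True, "Class II"),
}

def classify(intended_use: str) -> str:
    entry = SAMD_TABLE.get(intended_use.lower())
    if entry is None:
        return "unknown intended use: needs a regulatory determination"
    is_samd, typical_class = entry
    return f"SaMD, typical {typical_class}" if is_samd else "not SaMD"

# The same underlying model, two intended uses, two answers:
print(classify("Diagnosis support (human in loop)"))  # SaMD, typical Class II
print(classify("Diagnosis autonomous"))               # SaMD, typical Class III
```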
## Quick Ethics Framework

*From Chapter 21*

When facing an ethical dilemma in clinical AI:

1. Identify stakeholders: Who is affected? (Patients, clinicians, institution, society)
2. Clarify the dilemma: What values are in tension?
3. Gather facts: What do we actually know about impacts?
4. Consider principles:
   - Beneficence (does it help?)
   - Non-maleficence (does it harm?)
   - Autonomy (are choices respected?)
   - Justice (are benefits and burdens fairly distributed?)
   - Transparency (do stakeholders understand?)
   - Accountability (who is responsible?)
5. Explore options: What alternatives exist?
6. Evaluate tradeoffs: Who bears the costs, and who receives the benefits?
7. Decide and document: Make a reasoned decision and record the reasoning.
8. Monitor and revise: Track outcomes and adjust as needed.
## Emergency Reference: When AI Fails

If you suspect a clinical AI system is providing incorrect outputs:

### Immediate Actions
- Do not rely on the output for the current patient
- Use clinical judgment and standard-of-care protocols
- Document the AI output and your clinical decision
### Reporting
- File a safety report through your institution’s system
- Notify the AI system owner (see system documentation)
- Include: patient context (de-identified), the AI output, your assessment, and why you suspect an error
### If a Widespread Problem Is Suspected
- Contact the responsible AI governance committee
- Request temporary suspension pending investigation
- Alert colleagues who may be affected
### Remember
- AI failures may affect many patients simultaneously
- Your report may protect other patients
- Even uncertain concerns should be reported