Quick Reference: Clinical AI Checklists

This appendix collects the key checklists from throughout the book for easy reference. Print this section or keep it accessible when evaluating, deploying, or using clinical AI systems.

The Clinician’s Checklist

From Chapter 18: Interpretability & Uncertainty

Before trusting an AI tool’s output, ask yourself:


Pre-Deployment Checklist

From Chapter 16: AI Ops & Deployment

Before any clinical AI system goes live, verify:

Performance Validation

Infrastructure

Monitoring

Governance

Compliance

Go/No-Go Decision
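
The Monitoring item above can be made concrete with an automated drift check. Below is a minimal sketch using the Population Stability Index (PSI), a common drift metric; the function name, synthetic data, and the 0.1/0.2 thresholds are conventional rules of thumb used for illustration, not prescriptions from this book.

```python
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """Compare a feature/score distribution between a baseline sample
    (e.g. the validation set) and live production data.
    Rule of thumb: PSI < 0.1 stable, PSI > 0.2 meaningful drift."""
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    current = np.clip(current, edges[0], edges[-1])  # keep values in range
    b_frac = np.histogram(baseline, edges)[0] / len(baseline)
    c_frac = np.histogram(current, edges)[0] / len(current)
    b_frac = np.clip(b_frac, 1e-6, None)  # avoid log(0)
    c_frac = np.clip(c_frac, 1e-6, None)
    return float(np.sum((c_frac - b_frac) * np.log(c_frac / b_frac)))

rng = np.random.default_rng(0)
baseline = rng.normal(0, 1, 5000)      # model scores at validation time
drifted = rng.normal(0.8, 1.1, 5000)   # scores after a population shift
print(f"PSI vs. itself:  {population_stability_index(baseline, baseline):.3f}")
print(f"PSI vs. drifted: {population_stability_index(baseline, drifted):.3f}")
```

In practice a check like this would run on a schedule against each monitored input and score, with alerts routed to the system owner named in the governance plan.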


Field Guide Checklist

From Chapter 22: Writing the Field Guide

Every clinical AI field guide should include:


Model Evaluation Checklist

From Chapters 6 and 8

When evaluating a clinical AI model:

Data Quality

Performance Metrics

Generalization

Clinical Validity
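
The Performance Metrics step typically covers both discrimination (can the model rank cases above non-cases?) and calibration (do predicted probabilities match observed rates?). A minimal scikit-learn sketch follows; the synthetic cohort and coefficients are illustrative placeholders, not data from the book.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, brier_score_loss
from sklearn.model_selection import train_test_split

# Illustrative synthetic cohort: two noisy risk factors, binary outcome.
rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 2))
p = 1 / (1 + np.exp(-(0.8 * X[:, 0] + 0.5 * X[:, 1])))
y = rng.binomial(1, p)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
scores = model.predict_proba(X_te)[:, 1]

auroc = roc_auc_score(y_te, scores)     # discrimination
brier = brier_score_loss(y_te, scores)  # calibration + sharpness
print(f"AUROC={auroc:.3f}  Brier={brier:.3f}")
```

Held-out evaluation as shown is the minimum; the Generalization item above implies repeating these metrics on data from a different site or time period than the training data.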


Bias Audit Checklist

From Chapter 20: Fairness, Bias & Health Equity

When auditing a clinical AI system for bias:

Data Representation

Performance Parity

Mitigation

Deployment Safeguards
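
The Performance Parity step can be operationalized by computing the same metric separately for each subgroup and reporting the gap explicitly, rather than letting an overall average hide it. A minimal sketch with synthetic placeholder data (group labels, sample sizes, and noise levels are invented for illustration):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Synthetic example: outcomes, model scores, and a subgroup column.
rng = np.random.default_rng(1)
n = 3000
group = rng.choice(["A", "B"], size=n)
y = rng.binomial(1, 0.3, size=n)
# Scores informative for everyone, but noisier for group B (illustrative).
noise = np.where(group == "B", 0.35, 0.2)
scores = np.clip(y * 0.6 + rng.normal(0, 1, n) * noise + 0.2, 0, 1)

by_group = {}
for g in ("A", "B"):
    mask = group == g
    by_group[g] = roc_auc_score(y[mask], scores[mask])
    print(f"group {g}: AUROC={by_group[g]:.3f}  n={mask.sum()}")

# Surface the gap for the audit report instead of averaging it away.
gap = abs(by_group["A"] - by_group["B"])
print(f"AUROC gap: {gap:.3f}")
```

The same pattern applies to calibration, sensitivity at the deployed threshold, and any other metric the audit tracks; subgroup sample sizes should be reported alongside each estimate.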


Regulatory Pathway Checklist

From Chapter 21: Regulation & Ethics

When preparing a clinical AI system for regulatory submission:

Classification (US FDA)

Classification (EU MDR)

Documentation

Post-Market


SaMD (Software as a Medical Device) Classification Quick Reference

From Chapter 21

Is it SaMD? Ask: Does the software provide information used for clinical decisions?

System Type                         SaMD?   Typical Class
Administrative scheduling           No      N/A
EHR data storage                    No      N/A
Diagnosis support (human in loop)   Yes     Class II
Autonomous diagnosis                Yes     Class III
Treatment recommendation            Yes     Class II/III
Triage/prioritization               Yes     Class II
Risk prediction                     Yes     Class II

Key question: What is the intended use? The same algorithm with different intended uses may have different classifications.
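
As a rough mnemonic, the table above can be expressed as a lookup keyed on system type. This sketch is illustrative only: real classification turns on the intended use statement and the applicable regulatory framework, not a dictionary lookup.

```python
# Mnemonic lookup mirroring the quick-reference table above.
# Keys and classes are illustrative, not regulatory advice.
SAMD_TABLE = {
    "administrative scheduling": (False, None),
    "ehr data storage": (False, None),
    "diagnosis support (human in loop)": (True, "Class II"),
    "autonomous diagnosis": (True, "Class III"),
    "treatment recommendation": (True, "Class II/III"),
    "triage/prioritization": (True, "Class II"),
    "risk prediction": (True, "Class II"),
}

def classify(system_type: str):
    """Return (is_samd, typical_class) for a system type from the table."""
    return SAMD_TABLE[system_type.lower()]

print(classify("Risk prediction"))
```

Note that the same underlying model would map to different rows (and classes) depending on how its intended use is stated, which is exactly the key question above.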


Quick Ethics Framework

From Chapter 21

When facing an ethical dilemma in clinical AI:

  1. Identify stakeholders: Who is affected? (Patients, clinicians, institution, society)

  2. Clarify the dilemma: What values are in tension?

  3. Gather facts: What do we actually know about impacts?

  4. Consider principles:

    • Beneficence (does it help?)
    • Non-maleficence (does it harm?)
    • Autonomy (are choices respected?)
    • Justice (are benefits/burdens fairly distributed?)
    • Transparency (do stakeholders understand?)
    • Accountability (who is responsible?)

  5. Explore options: What alternatives exist?

  6. Evaluate tradeoffs: Who bears costs and benefits?

  7. Decide and document: Make a reasoned decision, record the reasoning

  8. Monitor and revise: Track outcomes, adjust as needed


Emergency Reference: When AI Fails

If you suspect a clinical AI system is providing incorrect outputs:

Immediate Actions

  1. Do not rely on the output for the current patient
  2. Use clinical judgment and standard-of-care protocols
  3. Document the AI output and your clinical decision

Reporting

  1. File a safety report through your institution’s system
  2. Notify the AI system owner (see system documentation)
  3. Include: patient context (deidentified), the AI output, your assessment, and why you suspect an error

If Widespread Problem Suspected

  1. Contact the responsible AI governance committee
  2. Request temporary suspension pending investigation
  3. Alert colleagues who may be affected

Remember

  • AI failures may affect many patients simultaneously
  • Your report may protect other patients
  • Even uncertain concerns should be reported