
We’re entering a new era in healthcare and medical education—one where AI is increasingly integrated into tasks such as clinical reasoning and decision support. To support clinicians and educators in using AI responsibly, we developed the SAFE approach:
S: Set Boundaries: Define what AI can and cannot do in your clinical context.
1. No final diagnoses, prescriptions, or critical decisions.
2. Use for drafts, brainstorming, summaries, or language simplification.
3. Establish clinical red lines where human judgment is non-negotiable.
A: Add Friction: Prevent blind reliance with built-in checks.
1. Label all AI-generated text: “Machine-generated. Verify before use.”
2. Require human review before integrating into EMRs (a sketch of such a review gate follows the framework below).
3. Add pop-up reminders or checklists before clinical use.
F: Foster Reflection: Create space for metacognition.
1. Debrief as a team: “Where did AI help us? Where did it overreach?”
2. Reflect not just on workflow, but on thinking habits.
3. Make critical thinking part of every AI-assisted task.
E: Educate Clinicians: Build AI literacy as a core clinical skill.
1. Explain how language models work—and where they fail.
2. Share real stories of hallucinations, bias, and misfires.
3. Use vignettes to spark ethical reflection and dialogue.
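For teams that wire AI drafting into their own tools, the friction described in the Add Friction step (labeling output and gating EMR integration on human review) can also be enforced in software. The sketch below is a minimal, hypothetical illustration in Python; the AIDraft class, commit_to_emr function, and field names are assumptions made for the example, not part of any real EMR or vendor API.

```python
# Hypothetical sketch of the "Add Friction" step in software.
# Names (AIDraft, commit_to_emr) are illustrative, not a real EMR API.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

AI_LABEL = "Machine-generated. Verify before use."


@dataclass
class AIDraft:
    text: str
    reviewed_by: Optional[str] = None       # clinician who verified the draft
    reviewed_at: Optional[datetime] = None

    def labeled_text(self) -> str:
        # Every AI draft carries a visible provenance label.
        return f"[{AI_LABEL}]\n{self.text}"

    def mark_reviewed(self, clinician_id: str) -> None:
        self.reviewed_by = clinician_id
        self.reviewed_at = datetime.now(timezone.utc)


def commit_to_emr(draft: AIDraft) -> str:
    """Refuse to file an AI draft until a named human has signed off."""
    if draft.reviewed_by is None:
        raise PermissionError("Human review required before EMR integration.")
    return draft.labeled_text()  # stand-in for the real EMR write


# Usage: the unreviewed draft is blocked; the reviewed one is filed with its label.
note = AIDraft(text="Patient education summary drafted by the assistant.")
try:
    commit_to_emr(note)
except PermissionError as err:
    print(err)

note.mark_reviewed("dr_smith")
print(commit_to_emr(note))
```

The point of the design is that the unreviewed path fails loudly: the draft cannot be filed, and the provenance label travels with the text even after sign-off.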
As this technology evolves, it’s crucial to recognize that AI is less about being “artificial” or “intelligent,” and more about automation and translation. It does not replace human judgment—it requires it.
