
Patients are consulting AI. Doctors should, too

AI needs to be part of doctor training

Ask most physicians today, and they’ll describe some version of this scene: In the middle of an appointment, a patient says, “I asked ChatGPT about the treatment you recommended.” 

A few years ago, doctors might have bristled. Today, this is the new reality. And yet, it’s exactly what tens of thousands of medical students and residents applying to programs this fall have been forbidden to do.

As an academic physician and a medical school professor, I watch schools and health systems around the country wrestle with an uncomfortable truth: Health care is training doctors for a world that no longer exists. There are some forward-thinking institutions. At Dartmouth’s Geisel School of Medicine, we’re building artificial intelligence literacy into clinical training. Harvard Medical School offers a Ph.D. track in AI Medicine. But all of us must move faster.

The numbers illustrate the problem. Every day, hundreds of medical studies appear in oncology alone. The volume across all specialties has become impossible for any individual to absorb. Within a decade, clinicians who treat patients without consulting validated, clinically appropriate AI tools will find their decisions increasingly difficult to defend in malpractice proceedings. The gap between what one person can know and what medicine collectively knows has grown too wide to bridge alone.

Our patients aren’t waiting. They have already consulted ChatGPT or other AI chatbots before they arrive at appointments. They ask questions that assume their physician has considered options that the doctor has never encountered. A colleague in Boston recently told me about a patient who had turned to a chatbot that raised three treatment options the doctor hadn’t initially considered. They spent 20 minutes working through the alternatives together. When the doctor explained his recommendation to his patient, he noticed her hands trembling. The AI had given her information, but he gave her reassurance. The AI outlined probabilities. He held space for her fear.


Several months later, she was in remission. The AI had helped her advocate for herself. But only the doctor could answer what she really wanted to know: “Am I going to be OK?”

This is the future of medicine: AI as consultant, not replacement. But some of our medical schools and health systems seem determined to prepare students for the past. Some schools have restricted AI use for coursework and clinical write-ups. The Association of American Medical Colleges limits the use of AI in residency applications; students and trainees often complain that they need to be competent in AI even as they’re told they can’t use it. The instinct to restrain new technology is understandable. It’s also obsolete. What students need instead are more useful institutional requirements.

First, AI verification protocols. Medical schools already run morbidity and mortality conferences where physicians review cases that went wrong. We need AI rounds where students present: Which model did you consult? What did it recommend? Where did you override it and why? This should be a standard part of clinical training, documented in the medical record, reviewed by attending physicians.

Second, transparency standards. The Accreditation Council for Graduate Medical Education (ACGME) should require residents to document AI consultation the way we document any specialist consultation. What question was asked? What answer was returned? What clinical judgment led to the final decision? This creates an auditable trail and teaches habits that will define their careers.

Third, competency assessments. Medical licensing boards should test AI literacy the way they test pharmacology. Which models have been validated for which clinical questions? What are the known error rates? When should an algorithm be trusted, and when should it be questioned? These aren’t theoretical questions. They’re the foundation of every treatment decision that learners will make.

Finally, patient consent frameworks. When AI informs clinical decisions, patients deserve to know. Not because the technology is inherently experimental, but because transparency is part of partnership and many deployments are still being evaluated for safety, privacy, and effectiveness. Students need practice with these kinds of conversations: “I consulted a clinical decision support tool that analyzes thousands of similar cases. Here’s what it suggested, and here’s why I agree or disagree.”

This matters most where American medicine is failing. At Dartmouth Health, we serve rural New Hampshire, Vermont, and Maine, where specialist shortages are severe in geriatrics, palliative care, and mental health. This fall, Geisel launched an AI curriculum that begins the moment students arrive, because we recognized a critical truth: If medical schools don’t guide how students think about and use these tools, technology companies will drive both the curriculum and clinical practice. We’re uniquely positioned to show how AI can bridge impossible gaps in underserved areas by training the first generation of clinicians who master the technology, rather than fear it.

Through my work and research in end-of-life care, I’ve held dying patients’ hands, embraced families in their final moments, and sat in silence when silence was the only honest response. No algorithm can do this work. But AI can make us smarter and more effective. The stethoscope isn’t obsolete. Neither is the held hand. They’re irreplaceable.

Right now, thousands of students and residents are interviewing at medical schools and training programs nationwide. To those learners: Ask about AI training. Ask how your education will prepare you for patients who arrive with AI-generated questions. Ask about clinical decision support tools. Ask how you’ll learn to be the doctor that AI cannot replace: the one who holds the hand, interprets the fear, and answers the question beneath the question.

To my colleagues in academic medicine: The ACGME should mandate AI competency standards by 2026. Medical licensing boards should add AI literacy to board exams within two years. Schools should replace their bans and restrictions on AI with clear protocols for its use. Our students deserve training for the medicine they’ll practice, not the medicine we remember.

To patients (in other words, all of us): The next time you pull out your phone and mention ChatGPT, your doctor should have a better answer than silence. Ask them: Have you consulted AI tools for my diagnosis? Were you trained to use them? If the answer is no, ask why not.

The choice isn’t between human doctors and artificial intelligence. It’s between doctors who use all available tools — technological and human — to serve their patients, and doctors who face an impossible task alone. We know the future our patients need. It’s time our medical schools and health systems caught up.

Angelo Volandes is a professor at Dartmouth’s Geisel School of Medicine, a clinician-investigator, and vice chair of research for the department of medicine at Dartmouth Health.
