Op-Ed: AI Medical Advice Is Missing Something Crucial

Artificial intelligence can give medical pointers, but it's flunking the art of actually knowing a patient, writes Dr. Danielle Ofri in a New York Times opinion piece, arguing there's no substitute for the doctor-patient relationship. The New York primary care physician describes instantly sensing that something was off with a longtime patient—not because of vital signs or statistical averages of people with his specific list of conditions, but from the way he breathed and the look on his face. That kind of pattern recognition, built over decades of relationship and context, is exactly what today's medical AI can't touch, she writes.
Ofri says the technology is useful for synthesizing data, suggesting diagnoses, and cranking out insurance appeal letters. The problem is that it ignores the multidimensional person. In the case of Ofri's patient, AI would have had no way of knowing that a family crisis had changed his eating habits, likely hampering his kidney function, she writes. Arguing that there remains "an ocean of distance between the 'patient' that AI is analyzing and the patient that the human doctor or nurse is assessing," Ofri says future clinicians will need at least as much training in the medical humanities as in algorithms, because medicine is less about retrieving the "right" answer than navigating a specific person's story. For her full case on why AI will remain a tool, not the doctor, read her essay in the Times.
