As we enter the mid-2020s, we find ourselves at a crossroads. Artificial intelligence is becoming more and more common, and its abilities are expanding at an exciting (or alarming, depending upon your point of view) rate. Soon, we will turn AI loose, and armed with hundreds of millions of patient charts to learn from, it will effortlessly out-diagnose and out-treat every one of us.
Even now, predictive models can sound alarms before all but the most vigilant human clinicians know something is wrong. To be better, all the models need is more information. Cameras and sensors in hospital and clinic rooms. Wearable monitors. We can feed them the data they crave.
And what's next? The days are coming when humans won't even need to program the predictive models. AI will discover trends in the data that have escaped us. And it's at that point that a modern society won't need doctors anymore. At least not anyone who can't perform surgery better than a robot. I know what you're thinking — that patients will always need empathy. That's true, but people have already rated chatbot responses to patient messages as more empathetic than provider responses. Fine. They still need a shoulder to cry on … but no one's going to pay someone a physician's salary for that.
There it is. Just over the horizon. The end of doctors.
While that future may come to pass, there's another way forward, one in which we live in harmony with the computers, one in which we embrace what they can do for us, and one in which our patients demand that we remain involved in their care. I don't believe patients want to be diagnosed by a computer. I believe they will always crave a human connection when their health is involved or when their lives are at stake. It takes a human to know a human.
The first step toward this future is unburdening us from the menial tasks that slow us down. I don't want to write notes anymore, and I don't think I'm alone. I want the computer to listen to what my patients say and what I say back, I want it to track what I do in the electronic health record, and then I want it to look back and learn from the tens of thousands of patients I've cared for through the years and generate a note in my style and in my voice. And, while that large language model is at it, I want to sic it on my inbox and have it draft replies that I can review for accuracy and appropriateness before sending. I want it to summarize years of notes into one document that I can digest in a few minutes. All of that will free up more time to spend with patients, create connections, and build relationships.
I want it to compile all of a patient's historical data and give me a differential diagnosis sorted by probability of disease, complete with a "why or why not" rationale for each potential diagnosis. Let me talk with my patients and bridge the gap between what the data are telling me and what those patients are living. I want it to do retrospective clinical research on the spot, show me which treatments are advised and why, and then give me the option to choose. Because despite the appellations given them by well-intentioned humans, AI and LLMs don't actually know anything. They can (and do) make "mistakes" by using faulty data. These so-called hallucinations still need a human to strike them from records and make sure they aren't perpetuated.
This version of the future will require us to understand how the emerging technologies of AI and LLMs work, what makes them valuable, and what makes them dangerous on both practical and philosophical levels. We physicians will need to think digitally and be able to critique diagnostic algorithms and predictive models. We will need to acquire a new lexicon — informatics — and add it to the languages of medicine and humanity that we already speak. And perhaps most importantly, we will need to be able to translate that ocean of digitized information into words and sentences our patients and their loved ones are able to understand and respond to.