With the release of ChatGPT in November 2022, generative AI exploded into public consciousness. Innovations and applications are emerging for nearly every major industry, with many businesses either already incorporating generative AI into their operations or planning to do so. And while there is certainly plenty of hype, the technology has already driven real and dramatic shifts in many industries.
The applications appear endless, from text generation and editing to writing code and debugging software. Generative AI has the potential not only to find and present information in a user-friendly way, but also to generate answers to questions that have never been asked. Creativity and innovation have always been uniquely human, but now technology, and generative AI in particular, is poised to fundamentally change the way work is done.
Take, for example, how generative AI handles writing tasks. Within seconds you can generate a long, detailed text exploring an esoteric topic, or distill a complex basic-science research paper into a simple summary. These tasks could easily have taken a human hours of research and time spent editing drafts. Imagine the bulk of this work being done by technology, with human review before completion.
Medicine and healthcare are not immune from these effects. The applications are broad, spanning research, education, and frontline clinical care. Some large language models (LLMs) have even passed the USMLE exams. Generative AI could accelerate drug discovery and development or help deliver precision medicine therapeutics. Many are touting its potential to revolutionize healthcare as we know it, and a myriad of health AI startups have burst into existence on this very premise.
At the same time, there are very real and concrete concerns. The stakes in healthcare are high, and there is no margin for error. It might be embarrassing to make an error in a research paper, or vaguely disturbing to have a chatbot insult you; in medicine, there is the very real danger of causing bodily harm.
Bias in the training of LLMs can propagate disparities and discrimination. The technology today also suffers from inconsistency, with a disturbing tendency to produce very different outputs from exactly the same inputs. Medicine has always prized reliability and predictability. Now the very tool that holds the promise of democratizing healthcare could in fact do the opposite. And then there is the real potential for hallucinations and errors to cause harm.
But simply denying its use in healthcare would be a huge disservice to patients and clinicians alike. The potential value is tremendous. And the reality is that, despite widely documented concerns, many patients and clinicians are already using tools like ChatGPT for medical use cases, whether or not those tools have been validated for such use or are even safe (ChatGPT is not HIPAA compliant).
We need to appreciate the limitations of generative AI to fully realize its benefits while mitigating its risks. AI does not recognize cause and effect, but it can certainly establish correlations and connections; the human role is to establish the how and the why. Understanding the limitations of the technology, and using that understanding to guide appropriate use, is as important as the underlying technology itself. The goal should be to augment the abilities of clinicians, not replace them.
AI doesn’t have common sense or intuition. One of the first key skills I press my house staff to learn is how to distinguish “sick” from “not sick” when triaging patients. While models can attempt to predict which patients are at risk of decompensation, that is no replacement for the eyes of a trained clinician. I think you would be hard pressed to find a clinician who disagrees.
Generative AI in clinical medicine today is best thought of as augmenting the clinician. It is best applied to tasks the clinician performs that don’t require their full skill level. The power lies in freeing the physician to focus on the tasks that do: performing procedures, interpreting complex diagnostics, synthesizing the whole picture into a coherent diagnosis and plan, or providing the human touch for patients. The value of generative AI as it exists today is in allowing the clinician to focus their attention and mental energy on the how and the why of caring for patients.
Visit www.suki.ai to learn more about how Suki’s AI-powered, enterprise-grade voice assistant helps tackle administrative burden for clinicians.
Suki Assistant received a 93.2 Overall Performance Score in the 2024 KLAS Spotlight Report. Get your copy now.
About the Author
Dr. Herprit Mahal is a Medical Director at Suki and a practicing Hospitalist. She is passionate about using technology to bring quality, evidence-based care to her patients. In her clinical role, Dr. Mahal has worked extensively in the Perioperative Medicine Clinic and led diabetes quality initiatives. At Suki she hopes to use technology to support high-quality medical outcomes.