How far is too far for AI in healthcare?

As artificial intelligence begins to proliferate in healthcare, health system digital leaders told Becker's it will only ever be able to do so much.

"There is a humanity to healthcare that zeros and ones can never replace," said Jeffrey Ferranti, MD, senior vice president and chief digital officer of Durham, N.C.-based Duke Health. "AI can look at pixels in an X-ray. AI can categorize patients by risk. AI can even help doctors become better doctors. But computers cannot comfort families, personalize care decisions, or remind patients that nothing is black or white and hope is the most powerful medicine."

As AI chatbots like ChatGPT have started giving medical advice and health systems employ machine learning and big data to improve the health of their patient populations, a majority of Americans say they're uncomfortable with AI being used in their medical care. So how far is too far?

"AI systems should never be used to make critical clinical decisions without human oversight," said Christopher Longhurst, MD, chief medical officer and chief digital officer of UC San Diego Health. "Even with a high degree of accuracy, there is always a risk of error, and human input can ensure that AI-based recommendations are made with ethical and moral considerations."

Ideally, Dr. Longhurst said, next-generation AI such as large language models will free up more time for clinicians to deliver even more "humanistic and empathetic care, which AI will never replace."

"I think of healthcare AI as 'augmented intelligence,' algorithms helping providers interpret large amounts of data so that wise human decisions can be made," said John Halamka, MD, president of Rochester, Minn.-based Mayo Clinic Platform. "I do not believe that AI should be a decision-maker for medical diagnosis. It cannot provide the empathy, respect for personal values, or ethical frameworks needed to align therapeutic options with patient needs."

Sacramento, Calif.-based Sutter Health, for instance, uses augmented intelligence to redirect 20 percent of patient portal messages away from physicians to more appropriate care team members and to give personalized, anticipatory advice to expectant mothers, efforts that have cut in-basket messages there by 15 percent, said Albert Chan, MD, chief digital health officer.

"The next frontier will most certainly provide artificial general intelligence support for common tasks, but with all deference to the potential power of various chatbots, we must be certain that the information generated does not just sound right, but actually is right," Dr. Chan said. "And most of all, I do not believe AI can ever replace the healing touch and empathy a compassionate clinician provides to our patients in their most vulnerable time of need for great healthcare."

Ashish Atreja, MD, CIO and chief digital health officer of UC Davis Health, also in Sacramento, agreed that generative AI could take over some repetitive tasks in healthcare, creating efficiencies along the way.

"However, machines lack internal consciousness, and the ability to distinguish right from wrong," he said. "Roles that depend on human empathy and critical decision-making in ambiguous situations can be supported but should never be replaced by AI."
