In a little more than two months since its launch, ChatGPT passed the U.S. Medical Licensing Exam and prompted major scientific journals to ban or restrict its use in research. Now, hospital and health system leaders are trying to determine where the technology could be most helpful and where it may cause harm.
The artificial intelligence-powered chatbot is being touted as a tool that could "transform" healthcare, as it is capable of mirroring intuitive human conversation. According to OpenAI, the tool's creator, ChatGPT learns from human feedback and can answer follow-up questions, admit its own mistakes, challenge incorrect premises and reject inappropriate requests.
While the tool is still in its early stages, hospital and health system IT and physician leaders believe the technology has significant potential.
"There are rare moments when you see a new technology capability and realize the future world will never be the same and will inevitably be transformed by this technology. Like the first iPhone, this is one of those moments," Aaron Neinstein, MD, vice president of digital health at UCSF Health in San Francisco, told Becker's.
Still, there are plenty of questions surrounding how quickly the tool may start to be integrated into healthcare workflows and what its limits are.
"I don't think it is a question of 'if' but 'how fast' and 'exactly how,'" said Tony Ambrozie, senior vice president and chief digital and information officer at Coral Gables-based Baptist Health South Florida. "AI being helpful to physicians and minimizing the administrative EHR burden on physicians would be very valuable given the well-documented physician EHR burnout."
Still, he added, the technology's strategy of constantly learning from what it is being asked could lead to privacy concerns in healthcare, as it could be used to deanonymize patient data. ChatGPT also sounds authoritative even when it is wrong. "That is a major problem for critical activities — like healthcare — where the providers using such a tool will need to constantly evaluate the validity, accuracy and value add of what's been suggested before acting on it," Mr. Ambrozie said.
Virtual 'assistant' for clinicians?
Several leaders referenced clinical documentation as a key opportunity for ChatGPT to improve workflows.
"The ability to exponentially extend a person's reach and productivity is a truly exciting development," said Tom Barnett, chief information and digital officer of Memphis, Tenn.-based Baptist Memorial Health Care. "For the physician, as an example, the capability to use this type of generative AI to keep close tabs on an entire patient population as well as summarize individual encounter notes all while simultaneously cross-examining most academic literature and research studies to cite within visit documentation in near real time is just the type of major accelerator healthcare could benefit from."
Darrell Bodnar, CIO of Whitefield, N.H.-based North Country Healthcare, who tested the tool, said he was impressed by its ability to be a "digital assistant" for healthcare professionals.
"The possibilities seem endless when looking at a beneficial digital assistant in almost any portion of the healthcare patient journey," he said. "From the very beginning of the journey, ChatGPT could assist with patient education, appointment scheduling, documentation and general navigation. Prior to the visit, prior authorization, appointment confirmations and healthcare summaries could be prepared and delivered."
The tool could also help clinicians order tests, provide clinical decision support, and produce discharge instructions and follow-up, Mr. Bodnar said.
"After interacting with ChatGPT, I have never been more convinced that AI will not replace clinicians but will instead prevent many from leaving the profession," said Michael Hasselberg, PhD, RN, chief digital health officer of University of Rochester (N.Y.) Medical Center.
Providers are suffering from "electronic data overload" amid the proliferation of patient portals and other third-party digital health apps, so AI chatbots can alleviate some of this burden, he said. He imagines a day when the technology will work alongside clinicians as a "virtual care assistant."
"It will take on the administrative tasks such as providing responses to prior authorizations and insurance claim denials, which no clinical provider enjoys doing," Dr. Hasselberg said. "It will enhance clinical decision-making by summarizing patient records and extracting the pertinent information that is needed to meet the patient's care needs in the moment."
Still, he said, AI is only as good as the data it employs, so that information will have to be accurate, up to date and unbiased for the technology to be used in a field as important as healthcare. As it stands, ChatGPT has limited knowledge on anything after 2021, so some of its ideas may be outdated or incorrect.
'The future of the delivery of medicine'
Jacksonville, Fla.-based Baptist Health has deployed an enterprise AI chatbot for password reset calls that has cut the time to resolve those inquiries from about nine minutes to a few seconds, said Aaron Miri, the health system's senior vice president and chief digital and information officer. He said the health system "affectionately" named its platform BELLE (Baptist Enterprise Linguistic Learning Environment).
"The ROI is crystal clear of the sheer power of AI chatbots," he said. "AI chatbots are the future of the delivery of medicine. Just like automation has made flying airplanes significantly safer, so too will AI chatbots."
Other possible ChatGPT uses for hospitals and health systems include writing rough drafts of patient education content; quickly summarizing lengthy medical records (with a HIPAA-compliant version of ChatGPT); and near- or real-time translation services, said Patrick Woodard, MD, chief healthcare information officer of Rapid City, S.D.-based Monument Health.
"The key, at least for now, will be to ensure that humans still review the work," he said. "ChatGPT is still learning, and like any learner, still needs some oversight."
Technology still novel, could pose risks
According to Zafar Chaudry, MD, senior vice president, chief digital officer and CIO of Seattle Children's, the tool's "newness" should warrant caution as ChatGPT could have security risks.
"This is new technology and in most cases is a 'free' technology," he said. "So it also stands to reason that we really need to proceed cautiously and understand exactly how this technology works, determine how accurate is it truly is, and verify from both a privacy and security perspective just what the implications of this 'free' technology could mean for patient and organizational data and what privacy protections and recourse should exist for healthcare organizations."
Recently, The New York Times reported that when tested, ChatGPT could offer only limited support for languages other than English and that it could not identify political material, spam, deception or malware. ChatGPT also warns its users that it "may occasionally produce harmful instructions or biased content."
Dr. Chaudry said healthcare needs to remain cautious about the tool's potential to generate false or inaccurate information.

"The risk can be significant due to the potential to generate inaccurate or false information," he said. "Therefore, its use in clinical medicine will require greater caution with lots of clinical collaboration and input into the specific clinical use cases."
Andy Chu, senior vice president of product and technology for Renton, Wash.-based Providence, said the industry still is a "long way off from the days of 'Big Hero 6,'" the 2014 Disney film that features a healthcare robot. But he said he envisions this technology being used to answer patient-related administrative, "decision-tree" or general health-education questions.
"However, it's important to remember that healthcare is very personal, and generative AI technologies are as good as the data accessed," he said. "Data sources within healthcare are rightfully secure and protected, so we must be careful not to overhype the promise — especially when it comes to areas such as symptom detections, triage, clinical treatment suggestions."
As John Halamka, MD, president of the Mayo Clinic Platform, put it in a Feb. 1 blog post, AI will need to be governed by policies and regulations. The Coalition for Health AI, of which Rochester, Minn.-based Mayo Clinic is a member, is one group working to develop those guidelines.
"ChatGPT takes artificial intelligence into a new realm, one that can create real value and palpable harm," Dr. Halamka wrote. "But we don't believe in artificial intelligence, we believe in augmented intelligence — that is to say, as humans, giving us a statistical analysis of data of the past to help us make decisions in the future is wonderful."