Hospitals and health systems can help ease patients' apprehension about AI by discussing the due diligence and care their organizations exercise in selecting AI tools and evaluating the tools' role in patient care, hospital and health system CIOs told Becker's.
A Feb. 22 study conducted by Pew Research Center found that 60 percent of Americans felt uncomfortable when their physicians used AI to diagnose diseases or recommend treatments.
Brad Reimer, CIO at Sioux Falls, S.D.-based Sanford Health, suggested that one way to combat this discomfort is to educate caregivers at hospitals and health systems on artificial intelligence.
"At Sanford Health, we've been working closely with one of our academic partners to develop a 'demystifying AI in healthcare' education series for our caregivers," Mr. Reimer told Becker's. "Eventually, our goal is to provide continuing education credits or a certification for caregivers who complete this program, which we believe is foundational from a provider and patient perspective."
Mr. Reimer said that caregivers need to understand AI basics, including how models are trained, how to determine whether a model is relevant for the patient they are seeing, and whether the AI recommendation is contextual or binary.
This knowledge, according to Mr. Reimer, will not only equip providers in their clinical decision-making but also allow them to have trusted conversations with their patients about the benefits of AI and why it is safe.
"Broader public awareness campaigns focused on 'facts over fear' would also help to cultivate and strengthen trust in AI with patients," Mr. Reimer said.
Stick to FDA guidelines
Simon Linwood, MD, CIO of Riverside, Calif.-based UCR Health, said another way to make patients more comfortable with AI's use in the clinical setting is to follow the FDA's guidelines on AI's development and use.
On Sept. 28, the FDA released new guidance on the use of AI-driven clinical decision support tools. The guidance offers suggestions on keeping these tools transparent and explainable, and on validating and monitoring them, so that they are easily understandable by healthcare providers and patients.
"By following the FDA's guidelines, healthcare providers can build trust with patients and improve patient satisfaction regarding the use of AI in healthcare," said Dr. Linwood.
Avoid the technical jargon
"When explaining AI's involvement in healthcare to patients, it is important to explain AI in a clear, concise and patient-centered way by using simple language and avoiding technical jargon," said Zafar Chaudry, MD, CIO and chief digital officer at Seattle Children's.
If healthcare organizations start with the basics, such as explaining what AI is and how it works, as well as its benefits, such as how it can improve diagnosis, treatment and care delivery, patients can become more comfortable with the concept, according to Dr. Chaudry.
But acknowledging patients' concerns must also be a priority.
"Acknowledge concerns and explain how AI is intended to complement, not replace, the patient-provider experience," said Dr. Zafar. "Also discuss the measures in place to protect patient data."
Don't leave patients to wonder
Patients reasonably educated on AI are right to be wary, and it falls on health systems using AI to make a proper effort to educate patients on exactly how this new and rapidly evolving technology is being used at their facilities, Randy Davis, vice president and CIO of Sterling, Ill.-based CGH Medical Center, told Becker's.
"We should not leave patients to wonder," said Mr. Davis. "Health systems need to create the narrative, or the narrative will be created for them."