If artificial intelligence goes all Terminator on us and destroys humanity, don't say health system leaders didn't try.
While digital execs told Becker's that such "bad science fiction" fears about AI are overblown, they still feel a responsibility to promote safe AI given that healthcare is expected to be one of the top users of the technology.
"AI is the single biggest technological advancement, maybe ever, at least in my 30 years in technology. Its ability to impact positive outcomes for our patients or caregivers is unprecedented," said B.J. Moore, CIO of Renton, Wash.-based Providence. "So it's got to be safe. We obviously don't want patients to get hurt. We don't want our credibility to get hurt. We don't want the technology's credibility to get hurt."
To that end, his health system built its own internal generative AI platform, ProvidenceChat, that tracks the questions employees are posing. "If they use ChatGPT, we have no way to know if they're asking it to build nuclear bombs or using it responsibly," Mr. Moore said.
Karandeep Singh, MD, chief health AI officer of UC San Diego Health, said the potential dangers of AI fall into four buckets: increasing inequities and biases; opening us up to more cyberattacks and data breaches; replacing human workers (such as medical scribes); and automating "mission critical" tasks such as military operations and nuclear infrastructure.
Healthcare's conservative nature could help restrain AI as a whole, he said. Medical associations, for instance, have been coming out with statements effectively opposing AI automation in healthcare.
"The really big fears largely rely on us automating some key aspect of human judgment to AI technology. In healthcare, that goes against the Hippocratic Oath," Dr. Singh said. "So I think you're going to really see an avoidance of delegating judgments to AI, except maybe in situations where the harms are really minimal or not really there at all, like smart scheduling or making sure our ORs are used to their maximum capacity."
John Halamka, MD, president of Rochester, Minn.-based Mayo Clinic Platform, recently returned from the World Economic Forum in Davos, Switzerland, where he said AI was the biggest topic of conversation, over even climate change and the conflicts in Gaza and Ukraine.
However, no one brought up apocalyptic warnings about AI — the kind of scenario he called "bad science fiction where you start with the AI and, before you know it, the robots are taking over the humans." Rather, the biggest fear — less cinematic though still important — is that algorithms will cause harm through bias and development on incomplete datasets.
Dr. Halamka contends that healthcare's use of AI is making everyone more careful about the technology.
"Ask your favorite tech CEO: 'Would you use generative AI today to diagnose your family's medical conditions and treat them?' And they will say, 'That's just not an appropriate use case right now,'" he said. "The notion of what the risks could be of using generative AI in healthcare is causing every industry leader to be reflective."
Nigam Shah, PhD, chief data scientist at Palo Alto, Calif.-based Stanford Health Care, said AI is unlikely to be given enough autonomy that it wrests control of a hospital and starts operating on patients. He also believes health system leaders' responsibility is to ensure AI is used safely within healthcare.
"We do not build these systems, and it is a bit out of scope to claim to ensure safety for society writ large," Dr. Shah said. "We — healthcare systems — can certainly be thoughtful buyers and only work with technology vendors that commit to building safe AI systems."
Crystal Broj, chief digital transformation officer of Charleston, S.C.-based MUSC Health, said AI should be looked at as another member of the care team — one that will always need its work reviewed by a senior (human) staffer.
"By distinguishing between realistic and speculative concerns about AI, we can focus on practical risk mitigation and contribute to the responsible advancement of AI in society," she said.
AI also brings an opportunity for healthcare to finally prove its technological worth and influence the direction of a potentially world-altering tool, according to Mr. Moore.
"Healthcare has missed every wave of technology," he said. "We're always the laggards. We're always 15 or 20 years behind. I would love, in five years from now, not only do we see the positive impacts of AI — I would love other industries to look to healthcare for the first time ever and say, 'Wow, healthcare really transformed with AI. How can we learn from them?'"