The thing that makes C-suite executives most excited about the future is also making them incredibly nervous: artificial intelligence.
AI exploded in healthcare over the last 18 months as ChatGPT ushered in a new era of tools that automate repetitive tasks, analyze data and generate sophisticated chatbot communications. But the technology carries risks, including trained-in biases, information security gaps and inaccurate outputs. Many health systems are developing their AI strategy as they go, appointing leaders to oversee AI efforts and beginning to set governance around it.
"Leading the artificial intelligence task force at Tulane is a lot of work, but also a great opportunity because it's been pushing me to look at all the facets and angles of the incredible opportunities and dangers that are associated with generative AI," Giovanni Piedimonte, MD, vice president of research, institutional official, and research integrity officer at Tulane University in New Orleans, told Becker's.
Health systems are grappling with the rapid pace of change for AI solutions. The technology has been swiftly adopted by large companies, including EHR vendor Epic and Microsoft, as well as digital health startups, with the goal of automating administrative functions and supporting medical decision-making. Early pilots have shown positive results for health systems, but there is clear danger in relying too heavily on AI.
ECRI named "insufficient governance of AI medical technologies" as one of the top health technology hazards of 2024. AI-driven solutions depend on sound algorithms and accurate training data, and shortcomings in either can lead to inappropriate responses and potential patient harm.
"Complicating matters for care providers is that they have little visibility into the methodology that the application uses to reach decisions and the data on which the application was trained," the ECRI report states. "This lack of transparency makes it difficult for healthcare professionals to judge system performance for their specific patient population."
ECRI recommended that health systems develop AI governance and standards, along with oversight mechanisms to assess risks and the impact on patient care. AI tools also require continuous monitoring after implementation to ensure they remain accurate and appropriate over time.
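In practice, that monitoring can be as simple as routinely comparing a model's live predictions against chart-confirmed outcomes and flagging drift for a governance committee to review. The sketch below is a minimal illustration of that idea; the function name, the 85% threshold and the escalation path are assumptions for the example, not part of ECRI's guidance or any vendor's product.

```python
# Illustrative sketch of post-deployment AI monitoring; all names and
# the 0.85 accuracy threshold are hypothetical, not from ECRI or a vendor.
from dataclasses import dataclass


@dataclass
class MonitorResult:
    accuracy: float
    alert: bool


def check_model_accuracy(predictions, outcomes, threshold=0.85):
    """Compare live predictions with confirmed outcomes; flag drift."""
    if not predictions or len(predictions) != len(outcomes):
        raise ValueError("predictions and outcomes must be the same nonzero length")
    correct = sum(p == o for p, o in zip(predictions, outcomes))
    accuracy = correct / len(predictions)
    return MonitorResult(accuracy=accuracy, alert=accuracy < threshold)


# Example: a weekly batch of chart-confirmed labels vs. model output
result = check_model_accuracy([1, 0, 1, 1, 0, 1], [1, 0, 0, 1, 0, 1])
if result.alert:
    print(f"Accuracy {result.accuracy:.0%} below threshold; escalate to the AI governance committee")
else:
    print(f"Accuracy {result.accuracy:.0%} within the acceptable range")
```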
"In a way, it feels like when J. Robert Oppenheimer was confronted with the supernatural power and unfathomable risks of nuclear energy and began understanding that the new technology could create a brand new world or destroy it," Dr. Piedimonte said. "It's been a very interesting experience, but it's just the beginning, so it's impossible to predict where this journey will take us."
A recent study from Pittsburgh-based UPMC and KLAS Research showed 65% of health systems don't have a systemwide governance policy for AI use and data access, and another 19% have only board-level policies covering AI. Among the 11 responding systems that did have AI governance policies, 55% of those policies hadn't been updated in the last year.
"I hope our team will be able to master the more productive aspects of this incredible technology while trying to avoid some of the problems," Dr. Piedimonte said.
Over the next several years, health systems will refine their policies around AI and generative AI to spell out appropriate use cases and remove potential biases as much as possible. Bharat Magu, MD, chief medical officer of Yuma (Ariz.) Regional Medical Center, told Becker's that healthcare will undergo a major transformation over the next three years, driven by technology and automation.
"Artificial intelligence is expected to assume a much more substantial role than it does today, potentially revolutionizing back-end operations, scheduling and patient access," he said. "However, it remains to be seen whether the advanced capabilities of large language models and AI, which have shown considerable potential in alleviating cognitive overload, will actually lead to a reduction in healthcare costs."