If a physician uses an AI system and the technology steers that physician in the wrong direction, it's unclear who has to compensate the patient, Politico reported Nov. 30.
Maulik Purohit, MD, chief health information officer of Allentown, Pa.-based Lehigh Valley Health Network, raised the question to the news outlet: If an AI system makes a mistake, "is that the responsibility of the technology itself?" or "is that the responsibility of the user?"
Currently there is no answer to those questions, but the American Hospital Association recently said it will continue to advocate for limitations on physician liability related to the use of AI-enabled technologies.
In the meantime, Dr. Purohit recommends physicians treat AI as a "GPS system," meaning it should assist them with data but not make or dictate decisions.
This sentiment is shared by many hospital and health system leaders, who say they are proceeding with caution when it comes to the technology.
"We are approaching this evolving technology with cautious optimism," UPMC CIO Ed McCallister told Becker's. "Recently, we published an AI policy and standard that governs how our employees can ethically and safely use AI, with a focus on automation and analytics."