AI will revolutionize every aspect of healthcare, from drug, medical device and medical software regulation to clinical education, according to three panelists at a Jan. 12 forum hosted by the University of Pennsylvania's Leonard Davis Institute of Health Economics in Philadelphia.
The panelists were I. Glenn Cohen, faculty director of Harvard Law School's Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics; Rangita de Silva de Alwis, director of the Global Institute for Human Rights at Penn's Carey Law School; and Nigam Shah, PhD, professor of medicine and biomedical data science at Stanford University.
The discussion centered on the role of AI in healthcare, potential regulatory action on AI technologies, and the potential for bias in medical AI devices.
One key point of discussion was the extent to which generative AI is acceptable in treating patients. Mr. Cohen posed this question to his fellow panelists: when ChatGPT starts answering medical questions, where do we draw the line between it acting as a physician and acting as a computer?
Mr. Cohen went on to discuss the regulation of AI and how private actors could go unregulated for long periods. Dr. Shah noted the many regulatory agencies the government operates and proposed a risk-based approach to regulation: an AI system involved in an amputation, for example, would warrant far more regulatory scrutiny than one drafting an automated email.
Regulation must also contend with biases in the health system. Because AI learns from human-produced sources, it will inevitably reflect human biases. Trained on current data, Ms. de Silva de Alwis said, it could therefore learn to poorly serve marginalized populations.
Participants also discussed the responsibility hospitals and health systems assume when deploying AI in their operations. According to Mr. Cohen, if AI were faulted for a poor medical decision, the health system or the developer, rather than the clinician, would be liable.