As hospitals and health systems begin to experiment with and pilot generative AI tools such as ChatGPT, many IT leaders said CIOs must be the ones to develop policies around appropriate use cases and to evaluate frameworks and regulations to stay on top of industry standards for generative AI.
Becker's asked four IT leaders: Should health systems begin to regulate the use of ChatGPT?
Robert Eardley. CIO of University Hospitals (Cleveland): Generative AI capabilities such as ChatGPT that are embedded into core computer applications should follow a governance process similar to the one applied to other net-new technologies.
Organizations should acknowledge the importance of ensuring that any "drafted" responses are subject to human review and approval. Most important early on is to be aware of and inventory any generative AI explorations within the organization. Automated responses based on generative AI should be tightly reviewed and approved for use within that context. Automated generative AI capabilities should undergo a stringent accuracy review, just as other technologies have been vetted before being embedded into an organization's workflow over time (such as drug-drug interaction alerts to prevent medication errors).
Darrell Bodnar. CIO of North Country Healthcare (Whitefield, N.H.): I think that CIOs must take a position and develop policies around the appropriate use of ChatGPT and all AI language models and services.
North Country Healthcare has already taken a position and provided a detailed policy and framework for the use and adoption of ChatGPT-like models.
Guidelines and regulated access need to be documented for all use, covering clinical and nonclinical scenarios, with the potential to share [protected health information] strictly defined. There also needs to be consideration of the risks of relying on any such services for guidance and decision-making, and of the liabilities that accompany that process.
There is no doubt that AI language models like ChatGPT are going to revolutionize the way we all work. The potential benefits in healthcare, at a time when labor markets are stretched so thin in every service and vertical, are tremendous; we just need to proceed with the same caution we apply to any new technology.
Sunil Dadlani. Executive Vice President and Chief Information and Digital Transformation Officer of Atlantic Health System (Morristown, N.J.): The use of ChatGPT and similar technologies is rapidly expanding. But before widely adopting them across a healthcare organization, there are several steps CIOs must take to ensure they are integrated in productive and secure ways.
Safely adopting any new technology is tied directly to a solid understanding of the regulatory landscape, particularly the rules governing data security, including HIPAA and GDPR. Additionally, because healthcare technology environments are interconnected, leaders must have full visibility into the access and usage agreements in place with third-party vendors and others to ensure data protection. When possible, avoid the danger of "building the car while driving it" and have proper policies and guidelines governing the use of generative artificial intelligence and machine learning technology in place before adoption.
Educating team members about the capabilities and risks associated with these technologies will help ensure they are used properly. Always be ready to conduct risk assessments and perform continuous monitoring and evaluations.
Maintaining cross-functional internal and external partnerships will also help you gain those insights more comprehensively.
Finally, make sure you are staying current with industry advancements, regulations, compliance and ethical frameworks. This field is evolving quickly, and staying a step ahead will help avoid mistakes in the future.
Paul Conocenti. CIO of Montage Health (Monterey, Calif.): At Montage Health, we are closely monitoring this new technology. We are cautiously optimistic about its value and equally concerned about its misuse and accuracy. Before moving any technology into our production environment, the use case must be validated and monitored for accuracy and security compliance — ChatGPT is no exception!