The potential healthcare privacy risks of ChatGPT

Hospitals and health systems whose providers use ChatGPT could be opening themselves up to HIPAA violations and lawsuits if they are not careful with patient data, a health policy researcher and a healthcare privacy attorney wrote in JAMA.

Clinicians who use the artificial intelligence chatbot in their practice are sharing that data with its developer, OpenAI, so they must be careful not to input protected health information, according to the July 6 article by Genevieve Kanter, PhD, an associate professor of health policy at the Los Angeles-based University of Southern California, and Eric Packel, a healthcare privacy and compliance attorney with the law firm BakerHostetler.

"This is harder than it sounds," they wrote. "Transcripts of encounters can be sprinkled with benign comments such as 'Not to worry, Mr. Lincoln. It's just a flesh wound, and we'll fix you right up' that are considered PHI. Casual references to a person’s residence in this context ('How's your new place in Springfield?') are also PHI. Patient names, including nicknames, references to geographic information smaller than a state, and admission and discharge dates, to cite a few examples, must be scrubbed before transcripts can be fed into the chat tool."

The two authors recommended that health systems train staffers on the risks of chatbots, for example as part of their annual HIPAA training.
