Critics bristle over creating MyChart messages with AI: 4 things to know

Critics are raising concerns about the use of generative AI to respond to patient MyChart messages, The New York Times reported Sept. 24.

About 15,000 physicians and assistants at more than 150 health systems are using a MyChart feature to draft replies to patient messages. The tool, In Basket Art, is built on a version of GPT-4 and pulls context from the patient's previous messages and from the EHR to draft a reply for providers to review and edit. The drafts mimic the physician's writing style and are intended to help providers respond to patients faster while reducing their mental burden. 

Garrett Adams, a research and development leader at Epic, which created MyChart, told the Times that the company has built guardrails into the program to prevent the AI from giving clinical advice and that the tool is not designed to improve clinical outcomes. Still, its use is not restricted to administrative tasks, according to the report.

But critics cited a number of issues with AI-generated messages to patients: 

1. There are no federal regulations and no widely accepted ethical framework, so each health system decides how to test a tool's safety and whether to inform patients of its use. Most systems do not disclose that AI is used to respond to patient messages, citing concerns that a disclaimer would be seen as an excuse to send messages to patients without properly vetting them, Brian Patterson, MD, physician administrative director for clinical AI at Madison, Wis.-based UW Health, told the Times. Telling patients the messages contain AI content could also cheapen the clinical advice, even when it is endorsed by their physician, Paul Testa, MD, chief medical information officer at New York City-based NYU Langone Health, told the newspaper.

2. People have a documented tendency to accept an algorithm's recommendations even when they contradict their own expertise, Ken Holstein, PhD, a professor at the Human-Computer Interaction Institute at Pittsburgh-based Carnegie Mellon University, told the Times. This automation bias can make physicians less critical when reviewing AI-generated drafts, allowing errors to reach patients.

3. A study published in The Lancet Digital Health found that GPT-4 made errors when answering hypothetical patient questions, and that 7% of those errors posed a risk of severe harm.

4. Some critics also acknowledged that the technology can be useful in various parts of healthcare but questioned whether automating human interactions is a good use of it. "Even if it was flawless, do you want to automate one of the few ways that we're still interacting with each other?" Daniel Schiff, PhD, co-director of the Governance and Responsible AI Lab at West Lafayette, Ind.-based Purdue University, told the Times.
