ChatGPT, the AI chatbot developed by OpenAI, was released in November 2022 and is already showing promise for the healthcare industry. Researchers at Boston-based Beth Israel Deaconess Medical Center recently found that GPT-4 can select the correct diagnosis in challenging medical cases 39 percent of the time and include the correct diagnosis on its list of potential diagnoses 64 percent of the time.
Other researchers have already demonstrated use cases for this large language model in healthcare. Most recently, researchers at Evanston, Ill.-based Northwestern University said ChatGPT could be used to create AI assistants that educate low-literacy patients about their health issues.
The researchers also created a template that encouraged ChatGPT to link to a trusted medical database in its responses, and they built an analytics engine that uses emergency department admissions data to train the AI.
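The article does not publish the Northwestern team's actual template, but the general idea can be sketched with the OpenAI Python client: a system prompt instructs the model to ground its plain-language answers in a named reference source. MedlinePlus is used here purely as a hypothetical stand-in for the "trusted medical database," and the model name and wording are illustrative assumptions, not the team's implementation.

```python
# Minimal sketch of a patient-education prompt template that asks the model
# to link to a trusted source. MedlinePlus is a hypothetical stand-in here.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_TEMPLATE = (
    "You are a patient-education assistant. Explain the patient's condition "
    "in plain language at a fifth-grade reading level, and support each "
    "claim with a link to the relevant MedlinePlus page "
    "(https://medlineplus.gov/)."
)

def explain_condition(condition: str) -> str:
    """Return a plain-language, source-linked explanation of a condition."""
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model choice
        messages=[
            {"role": "system", "content": SYSTEM_TEMPLATE},
            {"role": "user", "content": f"Explain {condition} to a patient."},
        ],
    )
    return response.choices[0].message.content

print(explain_condition("type 2 diabetes"))
```

Keeping the grounding instruction in the system message, rather than in each user question, is what makes this a reusable template: every response inherits the reading-level and source-linking constraints.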
The AI-based technology is also making its way into health systems' EHRs.
On June 15, UC San Diego Health said a limited number of its physicians now have the option to use ChatGPT to draft responses to patient messages received through its online patient portal, MyChart.
And physicians are finding their footing with the new technology, using it to craft more empathetic responses to patients.
Michael Pignone, MD, chair of internal medicine at the University of Texas at Austin, and his team used ChatGPT to craft messaging for patients who drink too much but have not been helped by behavioral health therapy.
The chatbot wrote a compassionate script at a fifth-grade reading level, making its recommendations easier for patients to understand.
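Reading level is a measurable property, not just an impression. One way a team could check a claim like "fifth-grade reading level" is with a standard readability formula; the sketch below assumes the third-party textstat package, which is not mentioned in the article.

```python
# Checking the reading level of generated text with the Flesch-Kincaid
# grade formula, via the textstat package (an assumption, not the
# Texas team's tooling).
import textstat

script = (
    "Cutting back on drinking can help your heart and your sleep. "
    "Start with one alcohol-free day each week."
)

grade = textstat.flesch_kincaid_grade(script)
print(f"Flesch-Kincaid grade level: {grade:.1f}")  # ~5.0 means fifth grade
```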
"Doctors are famous for using language that is hard to understand or too advanced," Christopher Moriates, MD, co-principal investigator of the project at University of Texas at Austin, told The New York Times. "It is interesting to see that even words we think are easily understandable really aren't."