ChatGPT responses to healthcare-related questions were on par with answers provided by healthcare providers, according to an NYU study published July 10.
A team of NYU researchers presented 392 people with 10 patient questions and their corresponding responses. Half of the responses were written by healthcare providers, and the other half were generated by ChatGPT.
Participants were asked to identify whether each response was generated by ChatGPT or by a human healthcare provider, and to rate their trust in ChatGPT's responses on a 5-point scale.
People had trouble distinguishing between ChatGPT's responses and provider-generated responses. On average, individuals correctly identified chatbot responses 65.5 percent of the time and provider responses 65.1 percent of the time, according to the study.
The study also found that participants mildly trusted ChatGPT-generated responses, rating them 3.4 out of 5 for trustworthiness on average.
The study, published in JMIR Medical Education, suggests that chatbots could assist with some patient-provider communication, including administrative tasks and the management of common chronic diseases.