Most of ChatGPT's answers to drug questions are wrong: Study

Nearly 75% of ChatGPT's answers to drug-related questions are incomplete or wrong, according to a study conducted at Brookville, N.Y.-based Long Island University.

Sara Grossman, PharmD, a pharmacy professor at the university and lead author of the study, and her team also found that ChatGPT sometimes generated fake citations to support its answers, some of which could endanger patients.

The researchers first posed 45 drug-related questions to pharmacists, whose answers were reviewed by a second investigator and used as the standard. Questions lacking published literature to support a data-driven response were excluded, leaving 39 questions for ChatGPT to answer.

Only 10 of the 39 ChatGPT responses were considered "satisfactory" by the parameters of the study, which spanned 16 months between 2022 and 2023. Of the other 29 responses, 11 did not answer the question, 10 were inaccurate and 12 were incomplete (some responses fell into more than one category).

Each prompt asked ChatGPT to provide references, but only eight responses did so, and all eight included nonexistent citations.

"Healthcare professionals and patients should be cautious about using ChatGPT as an authoritative source for medication-related information," Dr. Grossman said in a news release shared with Becker's. "Anyone who uses ChatGPT for medication-related information should verify the information using trusted sources."
