Interest in ChatGPT, the artificial intelligence-powered chatbot and text generation tool, is growing among physicians, with nearly 43 percent saying they would consider using it for medical research, according to a recent poll.
G-med, an online crowdsourcing platform for physicians, conducted the poll of 424 respondents and said that 42.69 percent responded in favor, while 32 percent said they would not use it and around 25 percent were unsure.
"The poll was conducted in the context of a growing concern about the use of large language models in scientific writing, and the need for transparency and integrity in research methods," G-med wrote in a blog post.
Transparency, integrity and ethics are at the heart of an ongoing debate across the scientific community around using the tool in research.
Bradley Malin, PhD, a medical ethicist with Vanderbilt University Medical Center in Nashville, Tenn., who also leads an ethics group for the National Institutes of Health's Bridge2AI program, told Becker's that the most important thing for physicians to keep in mind about ChatGPT for now is that it is a new tool that can be useful, but there is still much to learn about its accuracy and reasoning.
"It should definitely be used with caution at the moment, in that it may assist and speed things up for the investigator who uses it to search with, but they're still going to have to go off and verify that the information that has been provided to them is correct," Dr. Malin said. "That's the challenging part. I don't think anybody is ready to just completely trust any of the information that a privately owned AI system is going to provide to the end user at this time."
Dr. Malin also noted that ChatGPT is not trained on the most up-to-date information — right now, it only includes data up to 2021. Physicians looking to use it should therefore check its sources and rely on other resources for verified, up-to-date information.
He also noted that there is, of course, a difference between using ChatGPT as a tool to help summarize material or as an encyclopedia-style search function, and using it to author or write up medical research for publication.
Guidelines recently published in Nature caution the scientific community to understand the implications of potential research being "authored" by the tool. The publication notes that it will not accept any research generated from ChatGPT or tools like it, and maintains that "researchers using [large language model] tools should document this use in the methods or acknowledgements sections. If a paper does not include these sections, the introduction or another appropriate section can be used to document the use of the [large language model]."
It is still too early in the tool's development for sweeping policies to come into effect for the medical profession, Dr. Malin noted, but he said there are several ongoing conversations about how ChatGPT's capabilities might be incorporated into the profession in various ways.
"On one side, I think you're seeing there's a lot of excitement. … There have been discussions around, will this be useful for hypothesis generation and potentially streamlining or speeding up aspects of the scientific investigation process?" he said. "On the flip side of that conversation, there are a lot of questions around the ethical nature of the use of this technology without having a clear understanding of how exactly it is reasoning or where the information is coming from."
Dr. Malin said physicians should expect these conversations to continue as more use cases emerge and more information about the tool unfolds.