Stanford (Calif.) Health is rolling out a swath of new artificial intelligence tools for clinicians across its hospitals, the system said in a July 9 press release.
A suite of AI-infused tools called "Patients Like Mine" will equip the health system's community-focused physicians with data and reference solutions specific to diverse populations, with the aim of providing more data-informed patient care.
A separate tool is also being deployed behind the scenes and is currently in beta testing with Stanford clinicians: ChatRWD.
ChatRWD is similar to ChatGPT, the blockbuster generative AI platform that researchers have famously run through the MCAT.
"ChatRWD is currently in beta with nearly two dozen registered users at Stanford evaluating the tool for appropriateness in their specific workflows," Alexander Chin, MD, a clinical assistant professor and physician in Stanford's radiation oncology department, told Becker's. "Feedback to date has been very positive, and early indicators suggest that providers are finding utility in the tool for the day-to-day management of complex patients as well as for the acceleration of novel research."
ChatRWD scores the credibility and applicability of each answer it generates and learns from those scores.
"ChatRWD is able to generate such a high rate of relevant answers compared to other LLMs because it generates a de novo study answering the submitted question as opposed to summarizing published literature," said Saurabh Gombar, MD, an adjunct professor at Stanford's School of Medicine and chief medical officer of Atropos Health, which built the tool.
Rather than combing through the tens of thousands of medical and clinical studies published each year and regurgitating them in a summarized answer format, as ChatGPT does, ChatRWD instead "generates a de novo study answering the submitted question," Dr. Gombar said.
ChatRWD's ability to quickly review complex, clinically informed results has been very useful to clinicians at Stanford, Dr. Chin said, adding that the next step to increase its value as a tool is getting it to clinicians at the bedside.
"As providers are tasked with caring for increasingly complex patients in an increasingly convoluted healthcare environment, they must be able to decipher the signal from the noise with limited time to make care decisions," Dr. Chin said. "This is where I am confident that tools like ChatRWD will shine. By rapidly surfacing relevant, scientifically robust evidence for clinical care to physicians, even when no published literature may yet exist, these tools can empower providers like never before. The next advancements will be streamlining integration into provider workflows for seamless use of clinical tools like ChatRWD at the bedside."
The tool's strong point is in assisting clinicians when published evidence is lacking for the question at hand, Dr. Gombar underscored. Asking ChatRWD about the use of GLP-1 drugs for a patient with a history of organ transplant is one key example.
"This population is almost always entirely removed from clinical trials and physicians who treat these patients for non-transplant related care, i.e., diabetes, have to use their best judgment instead of hard evidence when deciding between treatment options," Dr. Gombar said. "Other [large language models] would not be able to answer these questions because no literature exists to summarize from. With ChatRWD they can run a custom study for the patient at hand and get an answer of how the drug performed in all similar patients encountered previously. Similarly, treatment options for all conditions that get frequently excluded from clinical trials are great uses of ChatRWD."
Right now, it is too early to gauge the tool's full efficacy and clinical impact, but Dr. Chin noted that the early indications are positive.
"Early use-cases suggest more optimal selection of appropriate therapy in situations of complex drug-drug interactions and improved real-time care decision-making in the inpatient setting," Dr. Chin said.