A sepsis prediction model developed by Epic and used by hundreds of U.S. hospitals and health systems performs worse than claimed on the prediction tool's fact sheet, according to a validation study published June 21 in JAMA Internal Medicine.
To evaluate the model, researchers from Michigan Medicine in Ann Arbor examined nearly 40,000 hospitalizations across the health system from 2018-19. They excluded scores generated after a clinician had already recognized the patient's sepsis and intervened.
Findings showed the prediction tool correctly ranked patients by their risk of sepsis 63 percent of the time, lower than the 76 percent to 83 percent area under the curve indicated on the model's fact sheet, researchers said.
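The 63 percent and 76-83 percent figures describe the area under the ROC curve (AUROC), a standard measure of how often a model assigns a higher risk score to a patient who develops sepsis than to one who does not. A minimal sketch of that statistic, using made-up scores purely for illustration (not data from the study):

```python
def auroc(labels, scores):
    """Area under the ROC curve via the pairwise-ranking definition:
    the fraction of (positive, negative) pairs in which the positive
    case receives the higher score (ties count as half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical example: 1 = developed sepsis, 0 = did not.
# An AUROC of 0.75 means the model ranks a septic patient above a
# non-septic one 75 percent of the time; 0.5 is no better than chance.
print(auroc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # → 0.75
```

By this yardstick, the 0.63 the researchers measured sits much closer to chance than the 0.76-0.83 range on the fact sheet.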
The discrepancy is linked to several issues with the model's development, researchers said in a statement. First, the model includes data on all cases billed as sepsis, which presents issues since "people bill differently across services and hospitals and it's been well recognized that trying to figure out who has sepsis based on billing codes alone is probably not accurate," said Karandeep Singh, MD, study author and assistant professor of internal medicine at Michigan Medicine.
Additionally, the model was developed using a definition that pegs the onset of sepsis to the time a physician intervenes.
"In essence, they developed the model to predict sepsis that was recognized by clinicians at the time it was recognized by clinicians. However, we know that clinicians miss sepsis," Dr. Singh said.
The research team said the findings underscore the need for additional regulatory oversight and governance of clinical software tools.