4 Misconceptions about predictive modeling in healthcare

If there is one phrase that best describes the value of predictive modeling, it’s “history predicts the future.”

Companies such as Google, Facebook and Amazon.com understand this lesson well, which is why you start seeing ads for products about five seconds after you’ve browsed for them online.

This same thinking is now being applied to healthcare. Thanks in large part to the implementation of electronic health records (EHRs) and electronic claims filing, we now have massive amounts of data available for testing correlations and trends. This information enables us to build highly accurate predictive models, especially since we can match the outcomes the models predict to the actual outcomes and adjust the models as needed. It’s rather like starting with the answer to a math problem in school and then figuring out how to make the formula produce that answer.
Predictive modeling is already being used to determine the treatment of patients based on historical data. That’s not all it can do, however. It is equally adept at predicting financial outcomes, such as the likelihood that patients will pay the portions of the healthcare bill for which they are responsible. Armed with that information, providers can make much better decisions that help reduce bad debt while maintaining good relationships with patients.
The machine learning that underpins predictive modeling grew out of the statistics and mathematics communities of the 1960s, so much of the theory has been around for a long time. But now that healthcare is data-rich, it has the potential to become The Next Big Thing. Still, it’s not a panacea for all of a healthcare organization’s financial challenges. That’s why it’s important to understand what predictive modeling is – and what it is not. Following are four common misconceptions.

Predictive modeling replaces rules
In rules-based analytics, you have to think of all the possible conditions upfront and build them into the rules engine in order to uncover problems. Unfortunately, if something occurs that isn’t covered by the rules, such as a missing device charge when a pacemaker is implanted during a cardiac encounter, you have no way of catching it and the revenue is lost. In other words, you don’t know what you don’t know.
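To make the limitation concrete, here is a toy hand-written rule in Python. The procedure and charge codes are illustrative placeholders, not drawn from any real charge master, and a production rules engine would be far more elaborate.

```python
# A toy hand-written rule: the analyst must anticipate this exact scenario.
# The codes ("33208", "C1785") are illustrative placeholders only.
def check_pacemaker_rule(procedure_code, charge_codes):
    """Return a finding if a pacemaker insertion lacks a device charge."""
    if procedure_code == "33208" and "C1785" not in charge_codes:
        return "missing pacemaker device charge"
    return None

print(check_pacemaker_rule("33208", {"facility", "supplies"}))  # caught: a rule exists
print(check_pacemaker_rule("33249", {"facility", "supplies"}))  # missed: nobody wrote a rule
```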
Predictive modeling overcomes that limitation by using machine learning to spot trends and patterns in charges and identify the outliers. In the case of the pacemaker, if historically there is a device charge, but the paperwork for a particular patient doesn’t include one, the predictive model will catch it without the need for someone to build a rule for that situation.
The downside is that if the organization regularly misses the charge for the device, predictive modeling will see that as “normal” and won’t call it out. That’s why predictive modeling should be treated as an enhancement to – rather than a replacement for – a rules-based system. By using them together, the organization will catch more missing charges than either system would alone.
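By contrast, here is a minimal sketch of the learned approach: it derives the “rule” from historical frequency instead of requiring anyone to write it. The encounter records, codes and 70 percent threshold are illustrative assumptions, not how any particular system implements this.

```python
# A minimal sketch: learn which charges usually accompany a procedure,
# then flag encounters that are missing them. All data is illustrative.
from collections import defaultdict

# Historical encounters: (procedure_code, set of charge codes billed).
history = [
    ("33208", {"C1785", "facility", "supplies"}),
    ("33208", {"C1785", "facility"}),
    ("33208", {"C1785", "supplies"}),
    ("33208", {"facility"}),  # the occasional missed device charge
]

charge_counts = defaultdict(lambda: defaultdict(int))
encounter_totals = defaultdict(int)
for procedure, charges in history:
    encounter_totals[procedure] += 1
    for charge in charges:
        charge_counts[procedure][charge] += 1

def flag_missing_charges(procedure, charges, threshold=0.7):
    """Flag charges that usually accompany this procedure but are absent."""
    return [
        charge for charge, count in charge_counts[procedure].items()
        if count / encounter_totals[procedure] >= threshold
        and charge not in charges
    ]

# A new pacemaker encounter with no device charge gets flagged.
print(flag_missing_charges("33208", {"facility", "supplies"}))  # ['C1785']
```

Note that the caveat above shows up directly in the sketch: if enough historical encounters also omit the device charge, its frequency falls below the threshold and the model treats the omission as normal.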

One predictive model covers all the bases
While it would be nice if there were “one predictive model to rule them all,” so to speak, that is not the case. Different algorithms will produce different results from the same data, so it’s possible that even a well-constructed predictive model will miss certain patterns that another catches easily.
The best approach is to build several predictive models based on different algorithms and run them in parallel. As in a Venn diagram, the area where all of the models intersect will have the highest probability of being accurate, and thus should be prioritized for attention. Then come the areas where almost all intersect, and so on. This method will also help reduce the time spent chasing false positives.
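A hedged sketch of that consensus ranking, assuming each model simply returns the set of claim IDs it flagged (the model internals are stubbed out here):

```python
# Rank flagged claims by how many models agree on them; unanimous flags
# come first, single-model flags (more likely false positives) come last.
from collections import Counter

def rank_by_consensus(model_flags):
    """model_flags: one set of flagged claim IDs per model."""
    votes = Counter()
    for flagged in model_flags:
        votes.update(flagged)
    return sorted(votes.items(), key=lambda item: -item[1])

# Illustrative output from three models trained with different algorithms.
flags_a = {"claim-101", "claim-202", "claim-303"}
flags_b = {"claim-101", "claim-202"}
flags_c = {"claim-101", "claim-404"}

for claim, agreeing in rank_by_consensus([flags_a, flags_b, flags_c]):
    print(f"{claim}: flagged by {agreeing} of 3 models")
```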

Once you build predictive models, you’re done
That would be nice, but it’s simply not the case. As you add more data, or additional sources of data, it’s important to adjust the predictive models to deliver the optimal results.

The need for these changes will show up most prominently as you test your predictive models – an event that should occur on a regular basis. As you compare the predicted outcomes to the actual outcomes, it will become very apparent if the reliability has increased, decreased or remained the same. More and better data should result in superior reliability. If it doesn’t, or if the actual outcomes are starting to vary further from the predicted outcomes, it’s time to adjust your predictive models.
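As a rough sketch of that regular check, assuming outcomes can be compared as simple predicted/actual pairs; the 10-point tolerance and the adjustment trigger are illustrative assumptions:

```python
# Compare predicted outcomes to actuals per review period and alert when
# reliability drifts below the baseline. All figures are illustrative.
def reliability(pairs):
    """Fraction of (predicted, actual) pairs that match."""
    return sum(1 for predicted, actual in pairs if predicted == actual) / len(pairs)

# One list of (predicted, actual) outcomes per review period, oldest first.
periods = [
    [(1, 1), (0, 0), (1, 1), (0, 1)],  # baseline period: 75% reliable
    [(1, 1), (0, 0), (1, 0), (0, 1)],  # later period: 50% -- drifting
]

baseline = reliability(periods[0])
for i, pairs in enumerate(periods[1:], start=1):
    score = reliability(pairs)
    if score < baseline - 0.10:  # illustrative tolerance
        print(f"Period {i}: reliability fell to {score:.0%}; adjust the model.")
```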

Predictive modeling is only for large healthcare organizations
Because it’s new and requires the construction of complex algorithms, there can be a tendency to think predictive modeling will only work in a large hospital or health system. The reality is that it’s just as effective for small-to-mid-size providers – perhaps more so, since they have less margin for error in their financials.

The determinant isn’t the size of your organization; it’s whether your organization has collected enough historical data to build accurate predictive models and test them to ensure a high degree of reliability. A good rule of thumb: if your organization has been collecting data for more than a year, you have the raw materials to look for patterns and trends.

Prediction for your future
In this era of value-based reimbursement, it’s important to capture every dollar you can – as quickly as you can. Predictive modeling, when approached correctly, can help you hedge your bets and deliver better financial outcomes.

By Paul Bradley, Chief Data Scientist, ZirMed
