Effective AI in Healthcare RCM Requires Humans in the Loop

Artificial intelligence (AI) has become a fixture in healthcare revenue cycle management (RCM), where finance leaders are desperate for ways to relieve understaffed departments struggling under unprecedented volumes of payer audit demands and rising denial rates, without sacrificing accuracy or precision.

At a time when RCM staffing shortages are acute, AI provides a critical productivity boost. By investing in data, AI, and technology platforms, compliance and revenue integrity departments reduced their necessary team size by a third while performing 10 percent more audit activities compared to 2022, according to the 2023 Benchmark Report. In 2024, that productivity gain grew further to 35 percent, with teams continuing to do more with less through AI.

This is where AI shines. Arguably its greatest asset is its ability to surface outliers, the proverbial needles in the haystack, across millions of data points.

Unfulfilled Promises

While AI has enabled the automation of many RCM tasks, the promise of fully autonomous systems remains unfulfilled. This is partially due to software vendors’ propensity to focus on technology without first taking the time to fully understand the targeted workflows and the human touchpoints within them, a practice that leads to ineffective AI integration and poor end-user adoption.

For AI to function appropriately in a complex RCM environment, humans must be in the loop. Human intervention overcomes deficits in accuracy and precision, the toughest challenges for autonomous AI, and enhances outcomes by helping organizations avoid the repercussions of poorly designed solutions.

Financial impacts are the most obvious repercussion for healthcare organizations. Poorly trained AI tools used to conduct prospective claim audits might miss instances of undercoding, which means missed revenue opportunities. For one MDaudit customer, an incorrect rule within its “autonomous” coding system was improperly coding the drug units administered, resulting in $25 million in lost revenue. The error would never have been caught and corrected if not for a human in the loop uncovering the flaw.

AI can also fall short in the opposite direction, overcoding claims through false positives, an area under particular scrutiny given the government’s mission of fighting fraud, waste, and abuse in the healthcare system.

Retaining Humans in the Loop

Again, keeping humans in the loop is the best strategy for preventing these negative outcomes. Three areas of AI in particular will always require human involvement to achieve optimal results.

1. Building a strong data foundation

A robust data foundation is crucial because the underlying data model, including proper metadata, data quality, and governance, is key to enabling AI to function at peak efficiency. Building it requires developers to get into the trenches with billing compliance, coding, and revenue cycle teams to fully understand their workflows and the data they need to perform their duties.

Effective anomaly detection requires billing, denial, and claims data, along with an understanding of the complex interplay among providers, coders, billers, and payers. That foundation ensures the technology can continuously assess risks in real time and deliver the information users need to focus their actions and activities in ways that drive measurable outcomes.
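To make the idea concrete, here is a minimal sketch of what an outlier screen over claims data could look like, assuming a Python environment with pandas and scikit-learn available. The field names, sample values, and contamination rate are illustrative assumptions, not MDaudit’s actual implementation.

```python
# Minimal illustrative sketch: flagging outlier claim lines for human review.
# Column names and parameters are hypothetical assumptions for illustration.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical claims extract: one row per claim line.
claims = pd.DataFrame({
    "billed_amount": [120.0, 135.0, 110.0, 9800.0, 125.0, 140.0],
    "units":         [1,     1,     1,     80,     1,     2],
    "denied":        [0,     0,     0,     1,      0,     0],
})

# Unsupervised outlier detection; the assumed contamination rate (share of
# anomalous lines) would in practice be tuned against audit findings.
model = IsolationForest(contamination=0.2, random_state=42)
claims["outlier"] = model.fit_predict(claims) == -1

# Flagged lines are routed to a human auditor rather than auto-adjudicated.
review_queue = claims[claims["outlier"]]
print(review_queue)
```

The point of the sketch is the last step: the model narrows millions of data points to a short review queue, and a human makes the final call.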

2. Continuous monitoring and training

AI-enabled RCM tools deliver continuous, real-time monitoring of risks, but like the professionals who use them, they require ongoing education to keep up with the latest regulations, trends, and priorities in an evolving healthcare RCM environment. Reinforcement learning allows AI to expand its knowledge base and increase its accuracy, and user input is critical to the refinements and updates that ensure AI tools meet current and future needs.

AI should be trainable in real time. End users should be able to support continuous learning by immediately providing input and feedback on the results of information searches and analyses. Users should also be able to mark data as unsafe, when warranted, to prevent its amplification at scale.
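As a rough sketch of what such a feedback loop might look like in code, consider the following; the class, field names, and verdict labels are hypothetical, not a real product API.

```python
# Illustrative sketch of a human-in-the-loop feedback store.
from dataclasses import dataclass, field


@dataclass
class FeedbackStore:
    """Collects end-user verdicts on AI results for later retraining."""
    records: list = field(default_factory=list)
    unsafe_ids: set = field(default_factory=set)

    def record(self, result_id: str, verdict: str, note: str = "") -> None:
        # verdict is e.g. "correct", "incorrect", or "unsafe".
        self.records.append({"id": result_id, "verdict": verdict, "note": note})
        if verdict == "unsafe":
            # Unsafe data is quarantined so it is never amplified at scale
            # by flowing back into the model's training set.
            self.unsafe_ids.add(result_id)

    def training_examples(self) -> list:
        # Only reviewed, non-quarantined results feed retraining.
        return [r for r in self.records if r["id"] not in self.unsafe_ids]


store = FeedbackStore()
store.record("claim-0001", "correct")
store.record("claim-0002", "unsafe", note="bad data in free-text field")
print(len(store.training_examples()))  # -> 1
```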

3. Appropriate governance

Human validation is required to ensure that AI’s output is safe. For example, for autonomous coding to work properly, a coding professional must verify that the AI has “learned” how to apply updated code sets or handle new regulatory requirements. Excluding humans from the governance loop leaves healthcare organizations open to revenue leakage, negative audit outcomes, reputational loss, and much more.
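One common way to operationalize this kind of governance is a confidence gate, sketched below under the assumption of a simple threshold policy; the function, threshold value, and sample code are illustrative, not a description of any specific vendor’s system.

```python
# Illustrative governance gate: AI-suggested codes are accepted automatically
# only above a confidence threshold that compliance staff set and own;
# everything else routes to a coding professional. Names are hypothetical.

REVIEW_THRESHOLD = 0.95  # assumed policy value, owned by compliance


def route_suggestion(claim_id: str, suggested_code: str, confidence: float) -> str:
    """Decide whether an AI coding suggestion may bypass human review."""
    if confidence >= REVIEW_THRESHOLD:
        return f"{claim_id}: auto-apply {suggested_code} (logged for audit sampling)"
    # Low-confidence or novel cases, such as newly updated code sets,
    # always go to a coding professional for validation.
    return f"{claim_id}: route {suggested_code} to human coder review"


print(route_suggestion("claim-0003", "J9355", 0.99))
print(route_suggestion("claim-0004", "J9355", 0.62))
```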

Without question, AI can transform healthcare RCM. But doing so requires that healthcare organizations augment their technology investments with human oversight and workforce training to optimize accuracy, productivity, and business value.

About the Author 

Ritesh Ramesh is CEO of MDaudit, a leading healthcare technology provider that partners with the nation’s premier healthcare systems to reduce compliance risk, improve efficiency, retain revenue, and enhance communication between cross-functional teams. As CEO, Ramesh is focused on driving growth and profitability for MDaudit with a customer-centric vision, strong team culture, and platform innovation. Ramesh has spent his entire career, which spans more than 22 years with leading professional services organizations, at the intersection of data, analytics, and emerging technologies, transforming business models across various retail and consumer-focused industries, including healthcare.
