AI has become a fixture in healthcare revenue cycle management (RCM) as finance leaders seek to provide relief for overburdened, understaffed departments facing unprecedented volumes of third-party audit demands and rising denial rates.
According to the newly released 2023 Benchmark Report, growing investments in data, AI, and technology platforms have enabled compliance and revenue integrity departments to reduce team size by 33% while performing 10% more audit activity compared with 2022. At a time when RCM staffing shortages run high, AI provides a critical productivity boost.
Healthcare organizations now report four times more audit requests than in previous years, and audit demand letters are running more than 100 pages. This is where AI shines: its greatest strength is uncovering outliers and needles in the haystack across tens of millions of data points. AI represents a significant competitive advantage for the RCM function, and healthcare finance leaders who dismiss AI as hype will soon find their organizations left behind.
Where AI Can Fall Short
Truly autonomous AI in healthcare is a pipe dream. While it is true that AI has enabled the automation of many RCM tasks, the promise of fully autonomous systems remains unfulfilled. That is due in part to software vendors' propensity to focus on technology without first taking the time to fully understand the targeted workflows and, importantly, the human touchpoints within them – a practice that leads to ineffective AI integration and poor end-user adoption.
Humans must always be in the loop to ensure that AI functions appropriately in a complex RCM environment. Accuracy and precision remain the hardest challenges with autonomous AI, and this is where keeping humans in the loop improves outcomes. While the stakes may not be as high for RCM as they are on the clinical side, the repercussions of poorly designed AI solutions are nonetheless significant.
Financial impacts are the most obvious for healthcare organizations. Poorly trained AI tools used to conduct prospective claims audits may miss instances of undercoding, which means missed revenue opportunities. One MDaudit customer discovered that an incorrect rule inside their so-called autonomous coding system was miscoding the drug units administered, resulting in $25 million in lost revenue. The error would never have been caught and corrected if not for a human in the loop uncovering the flaw.
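To make the drug-unit scenario concrete, here is a minimal sketch of the kind of prospective check a human-in-the-loop audit workflow might apply. The field names, billing increment, and drug code are hypothetical and not drawn from the MDaudit case; the point is the routing, where discrepancies go to an auditor rather than being silently auto-corrected.

```python
# Hypothetical field names and values; a sketch, not the actual vendor rule.
import math

def expected_units(administered_mg: float, mg_per_billing_unit: float) -> int:
    """Drug units are typically billed per defined increment, rounded up."""
    return math.ceil(administered_mg / mg_per_billing_unit)

def flag_unit_discrepancies(claim_lines: list[dict]) -> list[dict]:
    """Return claim lines where billed units disagree with documented dosage.
    Flagged lines are routed to a human auditor, not silently auto-corrected."""
    flagged = []
    for line in claim_lines:
        expected = expected_units(line["administered_mg"], line["mg_per_billing_unit"])
        if line["billed_units"] != expected:
            flagged.append({**line, "expected_units": expected})
    return flagged

# 100 mg administered, billed as 1 unit against a 10 mg billing increment:
# the check expects 10 units and flags the line for human review.
lines = [{"claim_id": "C1001", "drug_code": "J9999", "administered_mg": 100.0,
          "mg_per_billing_unit": 10.0, "billed_units": 1}]
print(flag_unit_discrepancies(lines))
```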
Likewise, AI may fall short on overcoding, producing false positives in an area where healthcare organizations must remain compliant and aligned with the federal government's mission of fighting fraud, waste, and abuse (FWA) in the healthcare system.
Poorly designed AI can also impact individual providers. Consider the implications if an AI tool is not properly trained on the concept of an "at-risk provider" in the revenue cycle sense. Physicians could find themselves unfairly targeted for added scrutiny and training if they are swept up in searches for at-risk providers with high denial rates. That wastes time that should be spent seeing patients, slows cash flow by delaying claims for prospective review, and can harm their reputation by slapping them with a "problematic" label.
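As a hypothetical illustration (not the article's method) of how an untrained notion of "at-risk" can misfire, compare a sweep based on raw denial rate alone with one that adds a minimum-volume safeguard and leaves borderline cases to human reviewers. The thresholds and provider data are made up.

```python
# Illustrative only: a naive "at-risk provider" sweep based on raw denial rate.
# Without minimum-volume and case-mix safeguards, low-volume physicians are
# easily mislabeled as problematic.

def naive_at_risk(providers: list[dict], threshold: float = 0.15) -> list[str]:
    """Flag any provider whose raw denial rate exceeds the threshold."""
    return [p["npi"] for p in providers
            if p["denied_claims"] / p["total_claims"] > threshold]

def safer_at_risk(providers: list[dict], threshold: float = 0.15,
                  min_claims: int = 100) -> list[str]:
    """Same check, but require a minimum claim volume before flagging,
    deferring small samples to a human reviewer instead."""
    return [p["npi"] for p in providers
            if p["total_claims"] >= min_claims
            and p["denied_claims"] / p["total_claims"] > threshold]

providers = [
    {"npi": "1234567890", "total_claims": 12, "denied_claims": 3},   # small sample
    {"npi": "9876543210", "total_claims": 400, "denied_claims": 30},
]
print(naive_at_risk(providers))   # flags the low-volume provider (rate 0.25)
print(safer_at_risk(providers))   # flags neither; the small sample goes to humans
```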
Keeping Humans in the Loop
Preventing these types of negative outcomes requires humans in the loop. Three areas of AI in particular will always require human involvement to achieve optimal outcomes.
1. Building a strong data foundation.
Building a strong data foundation is critical, because an underlying data model with proper metadata, data quality, and governance is essential to enabling AI to achieve peak efficiency. For this to happen, developers must take time to get into the trenches with billing compliance, coding, and revenue cycle leaders and staff to fully understand their workflows and the data needed to perform their duties.
Effective anomaly detection requires not only billing, denial, and other claims data but also an understanding of the complex interplay among providers, coders, billers, payors, and others, so the technology can continuously assess risk in real time and deliver the information users need to focus their actions and activities in ways that drive measurable outcomes. If organizations skip the data foundation and rush their AI models into deployment with shiny tools, the result will be hallucinations and false positives that create noise and hinder adoption.
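As a toy illustration of the anomaly detection described above (a sketch under assumed column names, not any vendor's model), the following flags charge lines that are statistical outliers for their procedure code. A production system would also weigh provider, coder, biller, and payor context before surfacing anything to users.

```python
# Minimal anomaly-detection sketch over hypothetical claim-line columns:
# flag charges that deviate sharply from the norm for the same CPT code.
import pandas as pd

def flag_charge_outliers(claims: pd.DataFrame, z_threshold: float = 3.0) -> pd.DataFrame:
    """Return claim lines whose charge is more than z_threshold standard
    deviations from the mean charge for that procedure code."""
    stats = claims.groupby("cpt_code")["charge_amount"].agg(["mean", "std"])
    joined = claims.join(stats, on="cpt_code")
    z = (joined["charge_amount"] - joined["mean"]) / joined["std"]
    return claims[z.abs() > z_threshold]

claims = pd.DataFrame({
    "claim_id": [f"A{i}" for i in range(1, 13)],
    "cpt_code": ["99213"] * 12,
    "charge_amount": [118, 120, 119, 121, 122, 117, 120, 123, 118, 119, 121, 990.0],
})
print(flag_charge_outliers(claims))  # surfaces only the $990 line for review
```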
2. Continuous training.
Healthcare RCM is a continuously evolving profession that requires ongoing education to ensure its professionals understand the latest regulations, trends, and priorities. The same is true of AI-enabled RCM tools. Reinforcement learning allows AI to expand its knowledge base and increase its accuracy, and user input is critical to the refinement and updates that keep AI tools meeting current and future needs.
AI should be trainable in real time, allowing end users to provide immediate input and feedback on the results of data searches and analyses to support continuous learning. It should also be possible for users to mark data as unsafe when warranted to prevent its amplification at scale. One example is output that attributes financial loss or compliance risk to specific entities or individuals without properly explaining why it is appropriate to do so.
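A minimal sketch of what such a feedback loop might look like, assuming a hypothetical findings store: users confirm, reject, or mark a finding as unsafe, and unsafe findings are withheld from retraining so they cannot be amplified at scale.

```python
# Hypothetical feedback store; class and label names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Finding:
    finding_id: str
    description: str
    feedback: str | None = None      # "confirmed", "rejected", or "unsafe"
    note: str = ""

@dataclass
class FeedbackStore:
    findings: dict[str, Finding] = field(default_factory=dict)

    def record(self, finding_id: str, feedback: str, note: str = "") -> None:
        """Capture an end user's judgment on an AI-generated finding."""
        if feedback not in {"confirmed", "rejected", "unsafe"}:
            raise ValueError(f"unknown feedback label: {feedback}")
        f = self.findings[finding_id]
        f.feedback, f.note = feedback, note

    def training_examples(self) -> list[Finding]:
        """Only reviewed, non-unsafe findings flow back into model training."""
        return [f for f in self.findings.values()
                if f.feedback in {"confirmed", "rejected"}]

store = FeedbackStore({"F-1": Finding("F-1", "Provider X tied to $1.2M risk, no rationale given")})
store.record("F-1", "unsafe", note="attributes risk to an individual without explanation")
print(store.training_examples())  # [] -> the unsafe finding is excluded from retraining
```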
3. Proper governance.
Humans must validate AI's output to ensure it is safe. Even with autonomous coding, a coding professional must confirm that the AI has properly "learned" how to apply updated code sets or handle new regulatory requirements. When humans are excluded from the governance loop, a healthcare organization leaves itself wide open to revenue leakage, negative audit outcomes, reputational loss, and much more.
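In practice, such a governance gate can be as simple as a routing rule: nothing reaches claim submission unless the AI's codes validate against the current code set and clear a confidence floor, and everything else is queued for a coding professional. The code set and threshold below are hypothetical placeholders, a sketch rather than a prescribed implementation.

```python
# Hedged sketch of a human-in-the-loop gate for autonomous coding output.
CURRENT_CODE_SET = {"99213", "99214", "J1100"}   # stand-in for the licensed code set
CONFIDENCE_FLOOR = 0.95                          # assumed model-confidence threshold

def route_coded_claim(claim: dict) -> str:
    """Return 'submit' or 'human_review' for an AI-coded claim."""
    codes_valid = all(c in CURRENT_CODE_SET for c in claim["codes"])
    confident = claim["model_confidence"] >= CONFIDENCE_FLOOR
    return "submit" if codes_valid and confident else "human_review"

print(route_coded_claim({"claim_id": "C-9", "codes": ["99213"], "model_confidence": 0.99}))
print(route_coded_claim({"claim_id": "C-10", "codes": ["99215"], "model_confidence": 0.99}))  # unknown code -> review
```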
There is no question that AI can transform healthcare, especially RCM. Doing so, however, requires healthcare organizations to reinforce their technology investments with human involvement and workforce training to optimize accuracy, productivity, and business value.