From Denied Claims to Shadow AI: A Provider's Guide to Getting AI Right 

March 13, 2026 | 7 Minute Read

The pressure on health systems has never been greater. Physicians are burning out, administrative burdens keep climbing, and the financial margin for error is shrinking. AI promises to help. In many cases, it genuinely can, but the gap between a well-implemented AI strategy and a collection of disconnected tools is wider than most organizations realize.

We sat down with Scott Poulin, VP of Technology at Improving, to address the four areas where AI is having the most real-world impact on healthcare providers right now and where the biggest risks are hiding.

Revenue Cycle Management: Demand Results, Not Promises

Initial claim denials hit 11.8% in 2024, with nearly half stemming from missing or inaccurate data. That's a significant amount of revenue lost to preventable errors, and it's exactly the kind of problem AI is built to address.

The potential is real: eligibility verification, coding accuracy, documentation completeness, denial prediction. But the expectations problem is just as real. Many organizations have invested in RCM tools expecting immediate, dramatic improvement, only to find the results underwhelming. The issue usually isn't the technology, but the foundation beneath it.

"People expected AI to be right 100% of the time, day one," Scott notes. "That's impossible. But it can be much more accurate over time."

The path forward is disciplined, not dramatic. With the right data collection practices and risk analysis frameworks in place, organizations can steadily reduce error rates, from 46% down to 32%, then 20%, then 10%, as AI learns continuously from their own workflows.

At Improving, we help health systems build toward autonomous coding and billing by starting with that foundation. Before evaluating any RCM tool, the more important question is whether your organization has the infrastructure to let AI actually improve. If the data going in is flawed, no tool will fix what comes out.

Clinical Documentation: Accuracy as a Partnership

The 2026 CMS physician fee schedule puts sharper emphasis on documentation accuracy, and the consequences of falling short are significant both financially and clinically. For many providers, this is where uncertainty around AI runs highest. What happens when something goes wrong?

The answer lies in how AI is positioned within the workflow. It isn't a replacement for physician judgment. It's a check on it and a resource that works both directions.

"It's a second set of eyes," Scott explains. "The doctor can code things effectively, but if something is missed, or vice versa, the AI codes it, and the doctor catches it. You catch more misdiagnoses, incomplete documentation gets filled out, and coding mismatches are addressed in near real time."
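That "second set of eyes" reconciliation can be sketched in a few lines. This is an illustrative sketch only, not a production system: the function name, the code lists, and the ICD-10 values are hypothetical, and real implementations operate on structured claim and encounter data rather than simple lists.

```python
def reconcile_codes(physician_codes, ai_codes):
    """Compare physician-entered and AI-suggested billing codes.

    Returns what each party caught that the other missed, so
    mismatches can be reviewed before the claim goes out.
    """
    physician, ai = set(physician_codes), set(ai_codes)
    return {
        "missed_by_physician": sorted(ai - physician),  # AI caught, doctor didn't
        "missed_by_ai": sorted(physician - ai),         # doctor caught, AI didn't
        "agreed": sorted(physician & ai),
    }

# Hypothetical ICD-10 codes for illustration
review = reconcile_codes(["E11.9", "I10"], ["E11.9", "I10", "N18.3"])
```

In this example the AI surfaces a code the physician missed, which is exactly the near-real-time catch described above, flagged for human review rather than applied automatically.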

There's a regulatory dimension here too. When new coding rules go into effect, AI-powered monitoring systems can absorb and apply those changes faster than any clinician working through a new memo. Better yet, when the system applies a new rule, it can explain why directly to the provider, at the point of care. Compliance becomes a learning moment rather than an audit finding.

Improving builds clinical documentation AI integrations with this partnership model in mind. The goal is a system that makes your team more accurate and better informed with every interaction, not one that introduces new uncertainty.

The Workforce Crisis: Integration Over Addition

With an estimated 96,000 physicians projected to exit the healthcare workforce in 2026, clinical staff are already stretched. Ambient scribes and AI note-taking tools were positioned as a relief valve: less documentation burden, more time with patients.

The tools can help. But there's a more important question that often goes unasked: Is documentation really the source of the burden?

"Most of these physicians and nurses are burnt out from all of these various different things," Scott observes. "There are a bunch of things they have to do that are more cognitively demanding than just taking notes."

A telling real-world case makes the point clear. A group deployed 30 different AI systems inside an emergency room to help manage clinical tasks. Each system demonstrated value individually. But when all 30 ran simultaneously, clinician performance dropped by 30% because staff now had to manage 30 different interfaces to make a single decision.

More tools don't equal less burden. Integration does. The question isn't whether any single AI product performs well in isolation, but whether your AI ecosystem works as a coherent whole, surfacing the right information at the right moment without demanding more of clinicians' attention.

That's where Improving focuses its work with health systems: not just evaluating whether a tool performs, but whether it genuinely fits the way care is delivered, and whether it reduces cognitive load or quietly adds to it.

Shadow AI: The Risk Already Inside Your Organization

Industry reports have flagged shadow AI as the #1 risk for healthcare providers in 2026. It's easy to see why.

Shadow AI isn't a dramatic breach or an obvious policy violation. It's a clinician using a popular AI platform to draft a note or summarize a patient record because it's fast and convenient. The intentions are good. But the data doesn't stay contained.

"These LLM systems may claim they don't use your data for building their models," Scott explains, "but they've explicitly said in public interviews that they use it for data science activities. That means your patient data, in some form, is out there."

In financial services, data exposure is serious. In healthcare, it can violate HIPAA, compromise patient trust, and create significant liability. Preventing it requires a system. Improving addresses shadow AI through a four-pillar framework we apply both internally and with our healthcare partners:

  • Policy — Clear, written rules about which AI tools are permitted and under what circumstances. 

  • Technology — Approved tools with defined capabilities and documented compliance standards. 

  • Training — Staff who understand not just how to use approved tools, but why the guardrails exist. 

  • Monitoring — Systems that sit between your infrastructure and external platforms, tracking what data is being shared and flagging policy violations before they become incidents.
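As a concrete illustration of the Monitoring pillar, a system sitting between clinical workstations and external platforms might run checks like the following before a request leaves the network. Everything here is a hypothetical sketch under stated assumptions: the allow-list host, the regex patterns, and the function name are invented, and real deployments use dedicated DLP and de-identification tooling rather than hand-rolled patterns.

```python
import re

# Hypothetical allow-list; a real deployment would load this from policy config.
APPROVED_AI_HOSTS = {"ai.internal.example.com"}

# Deliberately crude PHI patterns (SSN-like, MRN-like) purely for illustration.
PHI_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),           # SSN-shaped number
    re.compile(r"\bMRN[:#]?\s*\d{6,}\b", re.I),     # medical record number
]

def check_outbound(host: str, payload: str) -> list[str]:
    """Return any policy violations for an outbound AI request."""
    violations = []
    if host not in APPROVED_AI_HOSTS:
        violations.append(f"unapproved AI endpoint: {host}")
    if any(p.search(payload) for p in PHI_PATTERNS):
        violations.append("possible PHI in payload")
    return violations
```

A clinician pasting a record summary containing an MRN into an unapproved chatbot would trip both checks, letting the monitor flag the event before it becomes an incident rather than after.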

Improving's Technology pillar is built on AWS, leveraging native security controls, private cloud infrastructure, and HIPAA-eligible services to support healthcare workloads securely. This platform approach allows providers to standardize how AI tools are deployed, governed, and accessed, reducing the risk of unmanaged shadow AI while still enabling innovation. By anchoring technology decisions to a secure, compliant cloud foundation, organizations can enforce policy and monitoring consistently across teams.

This isn't a one-time exercise. It's an ongoing governance practice, and it's quickly becoming the baseline expectation for health systems operating responsibly in an AI-enabled environment.

Where to Start

The complexity of AI in the provider space can feel overwhelming. Tools multiplying, regulations tightening, workforce pressure mounting. It's a lot to navigate while also running a health system.

Improving's approach is to meet organizations where they are. Some health systems are already running mature AI programs and need help tightening governance or integrating fragmented tools. Others are earlier in the journey and benefit from a structured assessment of where they stand and what to prioritize first.

What's consistent across every engagement is a straightforward belief: getting AI right in healthcare isn't optional. For providers, it's becoming the foundation of sustainable, compliant, and financially sound operations. Ready to talk about where your organization stands? Let's start the conversation.
