
AI, Oh My!

The Newbie’s Guide to Artificial Intelligence in Utilization Management

Artificial intelligence (AI) is everywhere in healthcare. One week it’s in the headlines for spotting cancer earlier than doctors. The next week, it’s writing clinical notes or answering patient questions in a chatbot.

But here’s the thing—AI isn’t just one thing. It’s a whole set of different technologies. Some are great for clinical research. Some are perfect for automating office work. Others help patients directly. And some simply don’t belong in high-stakes decisions like whether a patient gets a treatment approved.

That last point is especially important when it comes to utilization management (UM). Health plans are under pressure to speed up decisions, reduce provider frustration, and meet new CMS rules. AI can help, but only the right kind of AI makes UM faster, fairer, and more transparent.

Let’s break it down.

What Are the Different Kinds of AI in Utilization Management?

Here are the main categories you’ll hear about, with simple examples:

  • Analytical AI: Analytical AI applies deterministic, rule-based logic to structured data. It excels in scenarios where transparency, auditability, and compliance are critical. In prior authorizations, this means using evidence-based guidelines and policy-driven frameworks to make determinations that can be traced and validated. Analytical AI is ideal for processes like clinical coding, claims validation, and prior authorization because these tasks demand precision and regulatory adherence.
  • Generative AI: Generative AI creates new content—text, images, or even synthetic data—based on patterns learned from large datasets. Its strength lies in summarization, drafting, and conversational interfaces. In healthcare, generative AI can streamline administrative tasks such as creating patient education materials or summarizing lengthy clinical notes. However, it is not suited for decisions that require strict compliance or deterministic outcomes, as its outputs are probabilistic and difficult to trace or audit.
  • Predictive AI: Predictive AI uses historical data to forecast future events or behaviors. In healthcare, predictive models can identify patients at risk for chronic conditions, anticipate hospital readmissions, or optimize resource allocation. These insights help clinicians intervene earlier and improve population health outcomes. Predictive AI is powerful for planning and prevention, but its recommendations should always be paired with human judgment to avoid unintended bias.
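To make the analytical category concrete, here is a minimal sketch of a deterministic, rule-based prior-auth check with a traceable audit trail. The policy rules, thresholds, and field names are all hypothetical, invented for illustration, not drawn from any real plan policy:

```python
# Hypothetical sketch of analytical, rule-based decisioning.
# Rule names, fields, and thresholds are illustrative only.

def evaluate_request(request, policy_rules):
    """Apply each policy rule to the structured request and record
    which rule produced which result, so the outcome is traceable."""
    audit_trail = []
    for rule in policy_rules:
        met = rule["check"](request)
        audit_trail.append({"rule": rule["name"], "met": met})
        if not met:
            # Never auto-deny; route the case to a human reviewer instead.
            return {"outcome": "pended_for_clinical_review", "trail": audit_trail}
    return {"outcome": "recommend_approval", "trail": audit_trail}

# Made-up policy for an imaging request.
policy = [
    {"name": "conservative_therapy_tried",
     "check": lambda r: r["weeks_of_therapy"] >= 6},
    {"name": "symptom_duration",
     "check": lambda r: r["symptom_weeks"] >= 4},
]

result = evaluate_request({"weeks_of_therapy": 8, "symptom_weeks": 5}, policy)
print(result["outcome"])  # recommend_approval
```

Because every rule evaluation is logged, the determination can be traced back to specific policy criteria — the auditability the bullet above describes.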

Each of these technologies is powerful. But not every type belongs in UM.

Why Don’t Some Types of AI Work for UM?

Two popular AI types are often pitched as “the solution” for UM, but they’re not the right fit on their own:

  • Predictive AI: Many vendors lean on this. They train models on historical data and then predict whether new requests should be approved. The problem? It’s a black box. Providers can’t see why the decision was made. Plans can’t tie it back to medical policy. And regulators won’t accept “because the model said so” as a rationale.
  • Generative AI: Great for creating drafts of documentation, but not for deciding if a patient’s chemotherapy or surgery should move forward. That’s too high-stakes to leave to a model that isn’t grounded in codified medical policy, and one that can hallucinate, filling in gaps without informing the user.

The takeaway? These tools have value, but they’re not built to handle the accountability and transparency UM demands.

Why Providers and Patients Should Care About the Kind of AI

It’s tempting to think only regulators worry about AI. But providers and patients care deeply about how decisions are made, especially when it comes to prior authorization.

  • For providers, predictive black-box AI feels like yet another barrier. They don’t know what criteria are being used or how to appeal inconsistent determinations. That erodes trust, and trust is already thin between payers and providers. Transparent, analytical AI changes that: providers can see that decisions are tied directly to plan policy and evidence-based guidelines.
  • For patients, it’s about delays. A two-week wait for a prior authorization feels like an eternity when you’re waiting on a biopsy or chemotherapy. Patients don’t care about AI models. They are focused on getting timely care. Responsible AI can help ensure less-complex cases are approved in seconds.

This is why the type of AI matters. It’s not just about efficiency. It’s about restoring trust with providers and protecting patients from unnecessary delays.

Cutting Through Market Confusion

Right now, there’s a lot of noise in the market. Vendors use “AI” as a catch-all term, but they don’t always clarify what kind of AI they’re talking about.

  • Some promote predictive models that guess at approvals but don’t show their work.
  • Others tout automation that speeds up paperwork but doesn’t touch clinical decisioning.
  • A few even market generative AI as if it can replace policy-based determinations.

For health plans, it’s confusing — and risky. That’s why the distinction matters: Black-box AI guesses. Transparent AI proves.

The first approach saves time but creates risk. The second builds trust, meets regulatory standards, and keeps the health plan in control.

So, What Kind of AI Does Work for UM?

The kind of AI that works for UM is the kind that is:

  • Transparent — every decision ties back to plan policy and its clinical guidelines.
  • Explainable — providers and regulators can see why each determination was made.
  • Responsible — it recommends approvals but never denials, leaving the final decision to human clinicians.

That means a blend of:

  • Decision-Support AI: Applies plan policy and clinical guidelines in real time, so decisions are consistent and audit-ready.
  • Natural Language Processing (NLP): Pulls key details out of clinical notes, ensuring requests have the right information the first time.
  • Agentic Automation: Handles the busywork of routing, pre-filling, and auto-approving routine requests that clearly meet policy.
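To show how these pieces fit together, here is a toy sketch of extraction plus routing. A real system would use clinical NLP; a regex stands in here purely to show the flow, and the note text, fields, and threshold are all hypothetical:

```python
import re

def extract_facts(note: str) -> dict:
    """Pull a key detail out of a free-text clinical note.
    (A regex stand-in for real clinical NLP.)"""
    facts = {}
    m = re.search(r"(\d+)\s*weeks of physical therapy", note, re.IGNORECASE)
    if m:
        facts["therapy_weeks"] = int(m.group(1))
    return facts

def route(facts: dict) -> str:
    """Auto-approve only when the request clearly meets policy;
    everything else goes to a clinician for review."""
    if facts.get("therapy_weeks", 0) >= 6:  # made-up policy threshold
        return "auto_approved"
    return "routed_to_clinician"

note = "Patient completed 8 weeks of physical therapy without relief."
print(route(extract_facts(note)))  # auto_approved
```

The point of the sketch: extraction ensures the request carries the right information, and routing reserves human attention for the cases that need it.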

Together, these make UM faster, fairer, and more efficient — without sacrificing compliance or trust.

Why Responsible AI Future-Proofs UM

Choosing the right kind of AI isn’t just about solving today’s problems. It’s about setting up for the future.

  • Regulatory readiness: CMS-0057 requires explainability, timeliness, and interoperability. Black-box AI won’t pass the test.
  • Scalability: Transparent AI can expand to support chronic condition pathways, hybrid models of delegation, and next-generation care management.
  • Provider relationships: Trust is fragile. Using Responsible, Clinical-first AI helps repair it.
  • Member expectations: Patients want speed and fairness. Transparent AI makes approvals feel less like roadblocks and more like guardrails.

In other words: the AI you choose today determines whether UM will keep up with tomorrow’s demands.

Built for Better UM

Not all AI is created equal. For UM, only transparent, policy-driven AI works. That’s how health plans can deliver on the promise of being Built for Better:

  • Better Decisions — consistent, evidence-based, and explainable.
  • Better Experiences — faster approvals and less frustration for providers and members.
  • Better Interoperability — FHIR®-enabled workflows integrated into existing systems.
  • Better Scale — automation handles routine requests; clinicians focus on the complex cases.

This isn’t about chasing the latest AI hype. It’s about choosing the right kind of AI for the job — and making UM faster, fairer, and better for everyone involved.

Availity AuthAI™

Availity brings transparent, responsible AI to utilization management. Every decision is tied to policy, fully explainable, audit-ready, and compliant with federal regulations. Routine cases are automated in seconds, while clinicians focus on the complex ones. The result? Faster care, fewer frustrations, and UM that’s Built for Better.

Ready to see how transparent AI can transform UM? Let’s build a system that’s Built for Better—together.

About the Author

Robert Laumeyer, the Chief Technology Officer at Availity, is a seasoned technology leader and inventor who has dedicated his career to bringing innovative technologies to fields needing improvement. Formerly the CTO of Olive and Verata Health, he has pioneered new technology categories across a spectrum of industries, from embedded software to finance.  

Renowned for his numerous patents, Robert has successfully sold companies or intellectual property to major players like Google, Microsoft, and Hewlett-Packard. At Availity, Robert uses his vast knowledge, experience, and passion to build transformative artificial intelligence (AI) systems that help provide personalized care to patients.  

Robert Laumeyer

Chief Technology Officer at Availity