
How Much Should Healthcare Leaders Be Thinking About AI Governance?

Pro Tip: A lot more than you might be.


I recently attended the HFMA Annual Conference in Denver, and all I can say is, “Wow, AI is here in a big way.” It was astonishing how many healthcare organizations are not only leveraging AI but thinking deeply about the long game: AI governance.


What’s easy to overlook in the scramble to bring new tools to market and evaluate them is what’s happening underneath: how the models are trained, what data they use, who’s making the rules, and what happens when they get it wrong.


And no, this isn’t something IT can “own” for you. 


AI governance is a leadership issue. It directly impacts your revenue and your reputation. If you’re not thinking about it yet, you need to start.


What Is AI Governance?


First, I want to be clear that this isn’t just conference hype. The global AI governance market was valued at $890.6M in 2024 and is projected to grow at a CAGR of 45.3% through 2029.


At its core, AI governance is about setting real, human-authored rules around how artificial intelligence is developed, used, monitored, and evaluated inside your organization.


It covers things like how you...


  • Select AI vendors and vet their models

  • Ensure data privacy and regulatory compliance (HIPAA, CLIA, GDPR, etc.)

  • Detect and correct bias or inaccuracies in automated decision-making

  • Document model behavior over time and establish who’s accountable when an algorithm makes the wrong call


It’s not about slowing things down; it’s about making sure the things you put into motion don’t create unintended harm or cost you in ways you can’t afford.


“AI Governance is where I believe the social impact work actually happens,” shares Blake Chambers, our Machine Learning Engineer. “It’s fundamentally about relationships—an act of empathy, not compliance. Algorithms alone don’t speak to human needs, and I’ve realized that the practices and processes we call ‘AI Governance’ serve to translate between what technology can do and what people need it to do.”

Why This Matters for Healthcare Leaders


To be blunt, your IT team may understand how AI works, but they don’t own your patient relationships or your revenue targets. You do. And that means the implications of an AI misstep, whether it’s an incorrect eligibility check, a biased triage system, or a privacy breach, land on your desk.


AI governance is the tool that protects you. Not just from regulatory fines (though those are real), but from:


  • Denials you can’t defend

  • Reputational damage you can’t reverse

  • Operational decisions you didn’t realize were being made by a machine


This isn’t sci-fi. It’s today.


Quick Example: The Eligibility Trap


Let’s say you’re using an AI-powered platform to verify insurance eligibility in real time. Great—faster throughput, fewer phone calls, better margins.


But what happens if that system starts misclassifying patients? Maybe it was trained on incomplete data, or it assumes certain zip codes are “high-risk” for coverage lapses. The result? You deny someone care they were actually eligible for. Or, you provide care, submit the claim, and it gets rejected because of a faulty front-end decision made by an opaque algorithm.


Is that the vendor’s fault? Is it your team’s? Did anyone catch the pattern before it affected 500 patients?


Governance answers those questions. Or at least makes sure you have a process in place before they come up.
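
To make that concrete, here’s a minimal sketch of the kind of check a governance process might require: comparing denial rates across zip codes to surface a pattern like the one above. The file name, column names, and threshold are hypothetical, and a real review would involve your compliance team and proper statistical testing.

import pandas as pd

# Hypothetical export of eligibility decisions: one row per patient check.
# The file and column names are illustrative, not from any specific vendor.
decisions = pd.read_csv("eligibility_decisions.csv")  # columns: zip_code, denied (0 or 1)

overall_rate = decisions["denied"].mean()

# Denial rate per zip code, restricted to zips with enough volume to matter.
by_zip = (
    decisions.groupby("zip_code")["denied"]
    .agg(denial_rate="mean", checks="count")
    .query("checks >= 50")
)

# Flag zip codes denied at more than twice the overall rate. This is a crude
# screening threshold for a human reviewer to tune, not a statistical test.
flagged = by_zip[by_zip["denial_rate"] > 2 * overall_rate]

print(f"Overall denial rate: {overall_rate:.1%}")
print(flagged.sort_values("denial_rate", ascending=False))

A check like this won’t prove bias on its own, but it surfaces the pattern early enough to raise it with the vendor before it reaches 500 patients.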


You Don’t Have to Build a Governance Framework from Scratch


The good news? You don’t need to be an AI expert to put smart guardrails in place. Start small. Focus on visibility, accountability, and alignment with your values.


Here’s how we do it at FrontRunnerHC and LabxChange:


  1. Inventory Where AI Is Already in Use. You may be surprised. AI is often embedded in third-party tools: scheduling systems, billing software, even call center scripts. Ask vendors directly: Does your solution use machine learning or AI? If so, how?

  2. Establish an Executive Stakeholder. Appoint someone outside of IT (or ideally a cross-functional group) to own AI oversight. This doesn’t need to be a full-time job, but someone needs to be accountable for asking the right questions.

  3. Define “Acceptable Risk.” What are you willing to automate? What decisions require human review? What data inputs are off-limits? Align with your legal, compliance, and clinical teams early.

  4. Build Evaluation into Procurement. Don’t just ask “Does this tool work?” Ask:


    - How was this model trained?


    - Is it explainable?


    - How does it handle outliers?


    - What data privacy measures are in place?

  5. Put an Audit Process in Place. Even the best AI will drift over time, so build periodic reviews into your ops calendar. These can be quarterly or biannual, but they must exist, especially in regulated environments. A minimal sketch of such a review follows this list.
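
Here’s what the audit in step 5 might look like in its simplest form: a script, run on the review calendar, that compares the tool’s current decision rates against a baseline captured when it was approved. The file names, column names, and five-point threshold are hypothetical placeholders that illustrate the idea, not a prescribed implementation.

import pandas as pd

DRIFT_THRESHOLD = 0.05  # flag if the denial rate moves 5+ points; tune with compliance

# Baseline decisions captured at go-live, plus the latest quarter's decisions.
# File and column names are illustrative only.
baseline = pd.read_csv("baseline_decisions.csv")  # column: decision ("approve"/"deny")
current = pd.read_csv("q3_decisions.csv")

baseline_deny = (baseline["decision"] == "deny").mean()
current_deny = (current["decision"] == "deny").mean()
shift = abs(current_deny - baseline_deny)

print(f"Baseline denial rate: {baseline_deny:.1%}")
print(f"Current denial rate:  {current_deny:.1%}")

if shift > DRIFT_THRESHOLD:
    # In practice this would notify the accountable stakeholder from step 2,
    # not just print a warning.
    print(f"DRIFT FLAG: denial rate shifted {shift:.1%}; trigger human review")

The specific statistic matters less than the discipline: the review is scheduled, scripted, and leaves a record someone is accountable for.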


What Leadership Needs to Watch Closely


Some of the biggest red flags to look for when evaluating AI tools:


  • Proprietary black-box model: If the vendor can’t explain how decisions are made, you can’t defend them to a patient or regulator. The vendor should be able to describe, in simple terms, what the model does.

  • Trained on public healthcare data: Public datasets are often outdated, biased, or irrelevant to your patient population. Ask for specifics!

  • No opt-out for AI decisions: Patients (and staff) should have the ability to escalate or override automated calls.

  • No version control or change logs: You need traceability if outcomes shift over time.


You wouldn’t roll out a new treatment protocol without documentation. Treat AI systems the same way.
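
What does that documentation look like in practice? Even an append-only change log per model goes a long way toward traceability. The fields below are a hypothetical starting point, not an industry standard.

import json
from datetime import date

# Hypothetical append-only log: one JSON line per model change, kept so a
# shift in outcomes can be traced back to a specific version and approver.
entry = {
    "model": "eligibility-verifier",  # illustrative model name
    "version": "2.4.1",
    "date": date.today().isoformat(),
    "change": "Vendor retrained on updated payer data; zip-code feature removed",
    "approved_by": "AI oversight stakeholder",
    "rollback_version": "2.3.0",
}

with open("model_change_log.jsonl", "a") as log:
    log.write(json.dumps(entry) + "\n")

If outcomes ever shift, this file is where the “what changed?” conversation starts.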


Human-Centered, Ethically Built: What We Believe at Summit


We’re not chasing the next shiny algorithm. We believe in human-centered AI: tools that work with people, not around them.


Blake reminds us that we are...


"building a system in the hopes that its outcomes and predictions will improve people's lives, accuracy, consistency, and trustworthiness. We are entering into social contracts with real people in a way that tries to understand what people actually need from a company for their lives to improve.”

AI is moving fast. You don’t need MIT grads to make smart decisions (although we have a few of those), but you do need a framework for accountability.


The longer you wait to address AI governance, the more ground you’ll have to make up later.


So, how much should you be thinking about AI governance?


If you’re a CEO, CFO, or revenue leader in healthcare: a lot more than you probably are right now.

 
 
 
