Artificial intelligence is often positioned as the solution to healthcare inefficiency. Yet the latest findings from the Blue Cross Blue Shield Association (BCBSA), reinforced by analysis from the Healthcare Financial Management Association (HFMA), suggest a more uncomfortable truth. When the wrong technology is applied to the wrong problem, especially in a regulated, expert-driven system, the consequences can be expensive and culturally damaging.
For professionals operating in regulated industries, this is not just a healthcare story. It is a cautionary tale about governance, tool selection, and the limits of automation.
Healthcare billing is not a simple administrative function. It sits at the intersection of clinical judgement, regulatory frameworks and reimbursement rules. Coding decisions are governed by strict standards, audits and compliance processes, and are typically performed by highly trained specialists.
This matters because the healthcare system is not designed for rapid, technology-led disruption. It is a regulated environment in which change must be incremental, evidence-based, and clinically justified. As some hospital leaders have pointed out, coding is already subject to “strict federal and industry standards” with significant investment in compliance and auditing.
The BCBSA study highlights what happens when AI enters this environment without sufficient alignment. Analysing tens of thousands of maternity admissions, researchers found a surge in coded diagnoses, such as acute posthaemorrhagic anaemia, without a corresponding increase in treatment.
This is not a trivial discrepancy. It signals a disconnect between clinical reality and administrative representation. As BCBSA summarised, “something is disconnected”.
From a systems perspective, the issue is not just AI adoption. It is the attempt to optimise a deeply regulated, expert-led process using probabilistic tools that are not inherently aligned with deterministic compliance requirements.
The HFMA analysis of the same trend sharpens the argument. AI-powered coding tools are being deployed to improve efficiency and revenue capture, but in practice, they are introducing variability into a process that depends on precision.
The BCBSA data quantifies the impact. AI-enabled coding is associated with approximately 663 million dollars in additional inpatient spending and at least 1.67 billion dollars in outpatient costs.
At a micro level, the pattern is equally concerning. Diagnoses are increasing faster than treatments, suggesting that AI systems may be optimising for billing completeness rather than clinical accuracy.
The BCBSA infographic shows a sharp rise in anaemia diagnoses from around 3 to 15 per cent in high-growth hospitals, while treatment rates barely moved.
This gap is where cost inflation emerges.
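To make the signal concrete, the gap described above can be sketched as a simple audit heuristic. This is an illustration only: the function names, rates, and threshold below are hypothetical, not taken from the BCBSA methodology.

```python
# Hypothetical audit heuristic: flag hospitals where the coded diagnosis
# rate for a condition rises much faster than the matching treatment rate.
# Rates are fractions of admissions; the threshold is illustrative only.

def diagnosis_treatment_gap(diagnosis_rate: float, treatment_rate: float) -> float:
    """Difference between how often a condition is coded and how often it is treated."""
    return diagnosis_rate - treatment_rate

def flag_for_audit(diagnosis_rate: float, treatment_rate: float,
                   max_gap: float = 0.05) -> bool:
    """Flag when coded diagnoses exceed treatments by more than max_gap."""
    return diagnosis_treatment_gap(diagnosis_rate, treatment_rate) > max_gap

# Mirrors the pattern described above: a condition coded in 15% of
# admissions while treatment rates sit near 3%.
print(flag_for_audit(0.15, 0.03))  # True -> worth a human review
```

A heuristic like this does not prove manipulation; it simply prioritises cases where the administrative record and the clinical record have drifted apart.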
But the deeper issue is cultural. When organisations deploy tools that produce questionable outputs, frontline professionals lose trust. In regulated environments, that loss of trust translates into resistance. Not just to the tool in question, but to future innovation.
This is a well-documented pattern across industries. As Gartner has noted in its work on AI governance, failed or misaligned deployments can create “change fatigue” and slow down broader digital transformation efforts.
In healthcare, the stakes are higher. Coding professionals, clinicians and compliance teams are not just users. They are accountable for outcomes. If AI introduces ambiguity into those decisions, it undermines both confidence and accountability.
A more appropriate approach, particularly in billing, would have been the use of expert systems. These deterministic systems follow explicit rules aligned with regulatory frameworks. They do not infer. They apply. In a domain where compliance is binary, that distinction matters.
By contrast, probabilistic AI models can generate plausible but unverifiable outputs. In billing, that creates risk.
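The distinction can be sketched in a few lines. Everything below is invented for illustration (the rule, the codes, and the stand-in model are not real ICD logic or a real product): a deterministic rule returns an auditable verdict, while a probabilistic model returns a confidence score that still requires human adjudication.

```python
# Illustrative contrast between an expert-system rule and a model score.
# Codes and rules are invented; this is not real coding-compliance logic.

# Deterministic expert-system style: an explicit, auditable rule.
def anaemia_code_is_supported(coded_diagnoses: set, treatments: set) -> bool:
    """Rule: an acute-anaemia code is valid only if a transfusion or
    iron-therapy record accompanies it. The verdict is reproducible."""
    if "ANAEMIA_ACUTE" in coded_diagnoses:
        return bool({"TRANSFUSION", "IRON_THERAPY"} & treatments)
    return True  # rule does not apply to this admission

# Probabilistic-model style: a score, not a verdict.
def model_suggests_code(note_text: str) -> float:
    """Stand-in for a trained model: returns a confidence score that a
    clinician must still verify against the chart before billing."""
    return 0.87 if "blood loss" in note_text.lower() else 0.12

print(anaemia_code_is_supported({"ANAEMIA_ACUTE"}, set()))   # False: fails the audit
print(model_suggests_code("Significant blood loss noted post-delivery"))  # 0.87
```

The rule can be cited in an audit: the code either has supporting treatment evidence or it does not. The score, however plausible, is not itself evidence.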
It would be simplistic to conclude that AI has no role in hospital billing. Proponents argue that these tools are uncovering legitimate complexity that was previously under-documented.
Healthcare delivery is evolving. More routine cases are handled outside hospitals, leaving inpatient settings with more complex patients. This shift can naturally increase coding intensity without any manipulation.
AI can also reduce administrative burden. By analysing clinical notes and lab results at scale, it can identify comorbidities that human coders might miss, improving completeness and potentially supporting better care planning.
There is also a broader adoption challenge. Despite the hype, AI remains unevenly deployed in healthcare. Even in areas where tools have regulatory approval, such as diagnostic imaging, adoption is limited. Many FDA-approved AI tools are rarely used in practice due to integration challenges, workflow disruption and clinician scepticism.
This reinforces a key point. Healthcare does not resist AI because it is conservative. It resists because it is accountable. Every decision must be explainable, auditable and defensible.
Consultancies have acknowledged both sides of this tension. McKinsey has highlighted the potential for AI to deliver substantial efficiency gains, including hundreds of millions in savings for insurers through automation.
Yet the same dynamics that create opportunity also create risk. When both payers and providers deploy AI to optimise financial outcomes, the result can resemble an “AI versus AI” arms race, increasing complexity rather than reducing it.
In that context, the issue is not AI itself, but how and where it is applied.
The BCBSA findings, supported by HFMA analysis, point to a clear lesson. In regulated, expert-driven systems, not all automation is beneficial.
Healthcare billing requires precision, consistency and compliance. These characteristics are better suited to deterministic systems than to probabilistic AI models. When the wrong tool is applied, the result is not just higher costs but also erosion of trust and slower adoption of future innovations.
For professionals in regulated industries, the implications extend well beyond healthcare.
AI will continue to shape healthcare. But its success will depend less on its sophistication and more on its fit.
Because in environments where expertise, regulation and accountability define the system, the real risk is not adopting AI too slowly.
It is adopting it incorrectly.