4 min read
The Hidden Cost of AI Conversations No One Talks About

AI is reshaping customer interactions at scale. Chatbots, copilots and generative interfaces promise efficiency, automation and cost reduction. Yet, as highlighted by Hans van Dam and reinforced by recent insights from Gartner, many organisations are not failing because AI is weak. They are failing because they are applying it incorrectly.

The result is not just underperformance. It is the accumulation of what van Dam calls “conversational debt”, combined with a broader pattern Gartner identifies across enterprise AI: high failure rates driven by poor use-case selection, unclear value and misalignment with business reality.

For regulated industries, this is more than a design issue. It is a governance and strategy problem.

Hans van Dam and the nature of conversational debt

Hans van Dam is the co-founder and CEO of the Conversation Design Institute, an organisation dedicated to setting standards for conversational AI design. His work focuses on how humans interact with automated systems and how those interactions can either build or erode trust.

In his article on the “two debts that destroy conversational AI”, van Dam introduces a critical idea. Poorly designed or poorly implemented AI conversations create hidden liabilities. These are not immediately visible in dashboards, but they accumulate over time as friction, repetition and unresolved user needs.

This aligns closely with broader enterprise AI trends. According to Gartner, more than 50% of generative AI projects fail to reach meaningful outcomes, often because they lack clear business value or are applied to the wrong use cases.

In other words, conversational debt is not an isolated UX issue. It is a symptom of a larger structural problem: deploying AI without aligning it to the real objective.

Wrong use cases create both technical and cultural debt

Van Dam’s core argument is that most conversational AI systems are designed for efficiency, not progress. Users are not looking for answers. They are trying to move forward.

When AI fails to support that goal, it creates friction. That failure takes two forms:

  • Clarity debt: unclear, ambiguous or misleading interactions
  • Progress debt: failure to resolve the user’s underlying need

At scale, this becomes a systemic issue. Customers repeat themselves. Cases escalate. Trust declines.

Gartner’s research provides a parallel at the enterprise level. The primary reason GenAI projects fail is not technical limitations, but poor use-case selection combined with a lack of measurable business value.

Five critical failure points sabotaging GenAI success

This is exactly what happens in conversational AI. Organisations deploy chatbots to reduce cost, without asking whether the interaction can actually be resolved through automation.

The cost of misalignment

  • AI optimises for containment, not resolution
  • Metrics focus on speed, not outcomes
  • Users experience friction rather than progress
  • Organisations accumulate hidden operational costs

Gartner also highlights a second critical factor: cost escalation. Without clear visibility and control, AI usage costs can grow rapidly and become a point of failure.

But the most overlooked impact is cultural. Failed AI deployments reduce confidence across the organisation. As Gartner notes, repeated failures make it harder to secure buy-in for future initiatives.

In regulated industries, this effect is amplified. When AI systems produce inconsistent or unclear outputs, professionals responsible for compliance will resist adoption.

This is where the choice of technology matters. Conversational AI often relies on probabilistic models that generate variable outputs. In contrast, many regulated interactions require deterministic logic, traceability and consistency.

A structured, expert-system approach would, in many cases, have been more appropriate. Not because it is more advanced, but because it is better aligned with the problem.

Complexity, regulation and adoption barriers

It would be simplistic to argue that organisations should simply design better conversations or abandon AI altogether.

Large enterprises operate within constraints. Processes are shaped by regulation, legacy systems and risk frameworks. Conversations are not just user interfaces. They are extensions of policy and compliance.

This makes change inherently difficult.

Gartner reinforces this complexity. Many AI initiatives fail not in experimentation, but when moving from pilot to production. Integration challenges, governance gaps and unclear ownership all contribute to failure.

Even when AI tools are technically capable, adoption remains limited. This is visible across industries, including healthcare, where even approved AI tools are underused due to lack of trust, workflow disruption and accountability concerns.

The same applies to conversational AI. The barrier is not capability. It is alignment.

Gartner also warns that many organisations approach AI as a technology deployment rather than a business transformation. Without clear objectives, success metrics and cross-functional alignment, projects stall or fail.

This explains why so many conversational AI systems underperform. They are introduced as tools, not as redesigned processes.

From this perspective, the problem is not AI itself. It is the mismatch between technology and context.

Reduce debt by choosing the right tool

Hans van Dam’s concept of conversational debt provides a powerful lens for understanding why many AI initiatives struggle.

When organisations deploy AI without a clear purpose, they incur hidden costs. When they choose the wrong type of AI for the problem, those costs scale quickly. And when projects fail, they do more than waste budget. They create resistance to future innovation.

Gartner’s findings reinforce this pattern:

  • Over 50% of GenAI projects fail due to poor use-case selection and a lack of value
  • Costs and complexity increase when governance is weak
  • Failed initiatives reduce organisational trust in AI

For regulated industries, the implications are clear:

  • Not every problem requires generative AI
  • Deterministic, expert-driven systems may be better suited for high-stakes interactions
  • Success depends on aligning technology with outcomes, not hype

AI will continue to evolve. But its impact will depend less on its capabilities and more on how it is applied.

Because the real risk is not that AI fails.

It is that it creates debt while appearing to succeed.

 
