AI is reshaping customer interactions at scale. Chatbots, copilots and generative interfaces promise efficiency, automation and cost reduction. Yet, as highlighted by Hans van Dam and reinforced by recent insights from Gartner, many organisations are not failing because AI is weak. They are failing because they are applying it incorrectly.
The result is not just underperformance. It is the accumulation of what van Dam calls “conversational debt”, combined with a broader pattern Gartner identifies across enterprise AI: high failure rates driven by poor use-case selection, unclear value and misalignment with business reality.
For regulated industries, this is more than a design issue. It is a governance and strategy problem.
Hans van Dam is the co-founder and CEO of the Conversation Design Institute, an organisation dedicated to setting standards for conversational AI design. His work focuses on how humans interact with automated systems and how those interactions can either build or erode trust.
In his article on the “two debts that destroy conversational AI”, van Dam introduces a critical idea: poorly designed or poorly implemented AI conversations create hidden liabilities. These are not immediately visible in dashboards, but they accumulate over time as friction, repetition and unresolved user needs.
This aligns closely with broader enterprise AI trends. According to Gartner, more than 50% of generative AI projects fail to reach meaningful outcomes, often because they lack clear business value or are applied to the wrong use cases.
In other words, conversational debt is not an isolated UX issue. It is a symptom of a larger structural problem: deploying AI without aligning it to the real objective.
Van Dam’s core argument is that most conversational AI systems are designed for efficiency, not progress. Users are not looking for answers. They are trying to move forward.
When AI fails to support that goal, it creates friction. And when that friction repeats across thousands or millions of interactions, it becomes systemic.
When systems fail to support that goal, these debts begin to accumulate. At scale, the problem becomes systemic: customers repeat themselves, cases escalate and trust declines.
Gartner’s research provides a parallel at the enterprise level. The primary reason GenAI projects fail is not technical limitations, but poor use-case selection combined with a lack of measurable business value.
This is exactly what happens in conversational AI. Organisations deploy chatbots to reduce cost, without asking whether the interaction can actually be resolved through automation.
Gartner also highlights a second critical factor: cost escalation. Without clear visibility and control, AI usage costs can grow rapidly and become a point of failure.
But the most overlooked impact is cultural. Failed AI deployments reduce confidence across the organisation. As Gartner notes, repeated failures make it harder to secure buy-in for future initiatives.
In regulated industries, this effect is amplified. When AI systems produce inconsistent or unclear outputs, professionals responsible for compliance will resist adoption.
This is where the choice of technology matters. Conversational AI often relies on probabilistic models that generate variable outputs. In contrast, many regulated interactions require deterministic logic, traceability and consistency.
A structured, expert-system approach would, in many cases, have been more appropriate. Not because it is more advanced, but because it is better aligned with the problem.
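To make the contrast concrete, here is a minimal sketch of what deterministic, traceable logic can look like in practice. This is an illustrative example, not taken from van Dam or Gartner; the domain (refund handling), rule names and thresholds are all hypothetical. The point is that the same input always produces the same outcome, and every decision carries an audit trail a compliance team can inspect.

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    outcome: str
    trace: list = field(default_factory=list)  # audit trail: (rule, passed)

def assess_refund(request: dict) -> Decision:
    """Deterministic, rule-ordered refund check.

    Unlike a probabilistic model, identical inputs always yield the
    identical outcome and the identical trace of rules evaluated.
    """
    decision = Decision(outcome="escalate")
    # Hypothetical policy rules, evaluated in a fixed order.
    rules = [
        ("within_window", lambda r: r["days_since_purchase"] <= 30),
        ("receipt_present", lambda r: r["has_receipt"]),
        ("below_limit", lambda r: r["amount"] <= 500),
    ]
    for name, check in rules:
        passed = check(request)
        decision.trace.append((name, passed))
        if not passed:
            # High-value cases go to a human; policy breaches are denied.
            decision.outcome = "escalate" if name == "below_limit" else "deny"
            return decision
    decision.outcome = "approve"
    return decision
```

The trace is the key property: when a regulator or internal auditor asks why a case was denied, the system can show exactly which rule fired, something a generative model cannot guarantee.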
It would be simplistic to argue that organisations should simply design better conversations or abandon AI altogether.
Large enterprises operate within constraints. Processes are shaped by regulation, legacy systems and risk frameworks. Conversations are not just user interfaces. They are extensions of policy and compliance.
This makes change inherently difficult.
Gartner reinforces this complexity. Many AI initiatives fail not in experimentation, but when moving from pilot to production. Integration challenges, governance gaps and unclear ownership all contribute to failure.
Even when AI tools are technically capable, adoption remains limited. This is visible across industries, including healthcare, where even approved AI tools are underused due to lack of trust, workflow disruption and accountability concerns.
The same applies to conversational AI. The barrier is not capability. It is alignment.
Gartner also warns that many organisations approach AI as a technology deployment rather than a business transformation. Without clear objectives, success metrics and cross-functional alignment, projects stall or fail.
This explains why so many conversational AI systems underperform. They are introduced as tools, not as redesigned processes.
From this perspective, the problem is not AI itself. It is the mismatch between technology and context.
Hans van Dam’s concept of conversational debt provides a powerful lens for understanding why many AI initiatives struggle.
When organisations deploy AI without a clear purpose, they incur hidden costs. When they choose the wrong type of AI for the problem, those costs scale quickly. And when projects fail, they do more than waste budget. They create resistance to future innovation.
Gartner’s findings reinforce this pattern, and for regulated industries the implications are clear.
AI will continue to evolve. But its impact will depend less on its capabilities and more on how it is applied.
Because the real risk is not that AI fails.
It is that it creates debt while appearing to succeed.