AI Doesn't Fix Bad Foundations. It Finds Them

If your processes, your data, and your customer journeys are broken today, AI won't fix them.

4/19/2026 · 4 min read

It'll amplify the cracks.

That line came out of a comment thread on my last article, and the more I sat with it, the more I realised it deserves its own piece. Because the gap between what AI promises in a vendor demo and what it delivers in a real contact centre comes down to one thing almost every time.

The foundations weren't ready.

The Demo Is Always Clean

Every AI contact centre demonstration looks the same.

The customer calls. The AI understands immediately. It retrieves the account. It offers the right solution. If it needs to escalate, it hands to a human with perfect context. The human resolves it in thirty seconds.

It's impressive. It's also performed against clean data, clear intents, and a controlled environment that looks nothing like your operation.

Nobody demos AI against a CRM with 40% duplicate records. Nobody shows you what happens when the AI hits a knowledge base that hasn't been maintained in eighteen months. Nobody runs the demo against your actual IVR intents — the ones built by a committee in 2021 and never properly reviewed since.

They demo the ceiling. You inherit the floor.

Why Most Foundations Aren't Ready — And Why That's Not Your Fault

Here's the uncomfortable truth: most contact centres have been papering over bad foundations for years. Not through negligence. Through necessity.

Human agents compensate. Naturally. Invisibly.

An agent sees "John Smith" appear three times in the CRM — a duplicate from a migration, a legacy record, and the current one. They know which is right because they've been doing this job for two years. They work around it without thinking.

An AI has no institutional memory. No workaround instinct. No context for which record is current. It either picks wrong or it fails. And it does this at scale, across every interaction, simultaneously.
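The gap between a human's workaround instinct and a naive automated lookup can be made concrete with a toy sketch. Everything here (the record fields, the statuses, both lookup rules) is invented for illustration, assuming a CRM where stale records were never cleaned up:

```python
# Hypothetical illustration of the "John Smith" problem: three CRM
# records for one customer. All fields and IDs are invented.
crm_records = [
    {"id": "MIG-00123", "name": "John Smith", "status": "migrated"},
    {"id": "LEG-99887", "name": "John Smith", "status": "legacy"},
    {"id": "CUR-00456", "name": "John Smith", "status": "active"},
]

def naive_lookup(name):
    # An AI integration with no disambiguation rule just takes the
    # first match -- which here is the stale migration record.
    for record in crm_records:
        if record["name"] == name:
            return record
    return None

def intentional_lookup(name):
    # The agent's workaround instinct made explicit: only records
    # marked active count as the current customer.
    matches = [r for r in crm_records
               if r["name"] == name and r["status"] == "active"]
    return matches[0] if matches else None

print(naive_lookup("John Smith")["id"])        # MIG-00123: wrong record
print(intentional_lookup("John Smith")["id"])  # CUR-00456: current record
```

The point isn't the code; it's that the second function only exists once someone has decided, deliberately, what "the right record" means.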

The same applies to intents. Your agents understand what a customer means when they say something ambiguous because they have years of pattern recognition to draw on. An AI model needs that pattern made explicit, bounded, and maintained. Most contact centre intent libraries are none of those things.

And knowledge bases. Your best agents don't use the knowledge base much anymore — they know the answers. Which means nobody noticed it's three product versions out of date until the AI started confidently telling customers the wrong thing.

Human judgment masked the gaps. AI removes the mask.

What AI Actually Needs to Work

Strip away the vendor narrative and what AI genuinely requires is straightforward — but demanding.

Clean data. Not perfect, but intentional. Duplicates resolved. Records that mean what they say. A single source of truth your AI can trust.

Clear intents with firm boundaries. Not a sprawling library of overlapping intents built incrementally over years. Deliberate design around what the AI will handle, what it won't, and where the lines are drawn.

Good APIs to your systems of record. Your AI is only as useful as the data it can reach. If it can't reliably query your billing system, your order management platform, your case history — it's guessing. Or worse, it's confidently wrong.

A maintained knowledge base. Not the one that exists. One that someone owns, reviews, and keeps current. An AI surfacing outdated information at scale is worse than no AI at all.
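"Owns, reviews, and keeps current" can start as something as simple as a staleness report. A minimal sketch, assuming articles carry a last-reviewed date (the field names and the six-month threshold are invented for illustration):

```python
from datetime import date, timedelta

# Hypothetical review threshold: anything untouched for six months
# gets flagged before the AI serves it at scale.
STALE_AFTER = timedelta(days=180)

articles = [
    {"title": "Returns policy",     "last_reviewed": date(2025, 11, 2)},
    {"title": "Plan pricing v2",    "last_reviewed": date(2024, 3, 15)},
    {"title": "Device setup guide", "last_reviewed": date(2023, 8, 1)},
]

def stale_articles(articles, today):
    # Flagged articles should be fixed or excluded from the AI's
    # retrieval index -- not served "confidently" to customers.
    return [a["title"] for a in articles
            if today - a["last_reviewed"] > STALE_AFTER]

print(stale_articles(articles, date(2026, 4, 19)))
# ['Plan pricing v2', 'Device setup guide']
```

A report like this changes nothing on its own; it only works if a named owner is accountable for clearing the list.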

A human agent experience worth having. Because AI escalation only works if the handoff lands somewhere good. If the agent desktop is a mess, if the context doesn't transfer, if the agent is overwhelmed — the AI just moved the problem downstream.

This Is a Business Problem, Not a Technology Problem

The instinct when a contact centre AI project struggles is to look at the technology. Different model. Better training. More configuration.

Usually the technology isn't the issue.

The issue is that the organisation tried to implement an AI layer on top of processes and data that weren't designed with AI in mind — and in many cases weren't working well even without it.

Getting AI-ready isn't a technology project. It's a data project, a process project, and in most organisations, a change management project. It requires asking hard questions about what you actually know about your customers, whether your systems reflect reality, and whether your intent design reflects how customers actually behave — not how you wish they would.

None of that is the vendor's job. None of it shows up in the implementation timeline. And almost none of it gets scoped properly before the contract is signed.

What This Means for the Platform Decision

This is exactly why the platform conversation should come last.

Not because platforms don't matter — they do. But because the most sophisticated AI platform in the market cannot compensate for a broken data foundation, poorly designed intents, or a knowledge base nobody owns.

The organisations that get the most from AI in their contact centres aren't necessarily the ones who chose the best platform. They're the ones who did the unglamorous work first. Who audited their data. Who redesigned their intents from the customer's perspective, not the system's. Who decided what AI should and shouldn't handle — and built firm fences around both.

That work isn't exciting. It doesn't feature in demos. You can't buy it from a vendor.

But it's the difference between AI that amplifies a great customer experience and AI that amplifies the cracks you've been papering over for years.

Fix the foundation. Then pick the platform.

Paul Wilson Co-founder, Canzuki | Vendor-agnostic CX consulting across NZ & AU | Problem first. Platform last.