In this Guide…

CX vendors have spent the last two years talking about AI.

Most of it means nothing to a team actually trying to run a feedback programme.

Here’s the one question every AI pitch should answer.

At a recent UKCCF event, a Cisco-led panel on AI and technology adoption brought together a room full of experienced CX practitioners.

All of them from large, serious organisations: people who know their operations, understand their customers, and have sat through more vendor pitches than they can count.

After an entire session on AI, the most honest thing anyone said was this:

“I still don’t really get what it means for me and my business and how it’s going to help my company be better.”

If you’ve spent the last twelve months sitting through AI demos and still can’t answer the question “what would I actually stop doing after implementing this?” — that’s not a you-problem. It’s a vendor problem.

SaaS vendors aren’t answering the question. And until they do, no amount of “AI-powered” in the marketing copy is going to help anyone.

The Confusion Is Real

From conversations at events like this one, the pattern is consistent: CX practitioners are aware that AI is important (they’ve all used an LLM chatbot), they accept it’s relevant to their sector, and they have no idea what to do with it. Not because they lack technical knowledge, but because every vendor pitch describes what AI is rather than what it saves you from doing.

“AI is plastered everywhere,” as one attendee put it, “but nobody’s saying what it means for me.”

The frustration isn’t with AI itself. It’s with vendors rushing to tell you “we use machine learning to analyse customer sentiment at scale” instead of “here’s what your three-person team will do differently on Monday morning.”

We’re as guilty as the next VoC platform, and we need to do better, because the result is CX teams who feel slightly embarrassed that they don’t “get it”, when the problem is our pitch, not your people.

Why AI Messaging Defaults to Features

Feature-led AI messaging is the path of least resistance for vendors.

Describing a feature requires no knowledge of your specific situation. “Our AI analyses customer feedback in real time” is a claim that applies to every customer regardless of team size, workflow, sector, or budget.

Describing an outcome is harder. It requires knowing your workflow well enough to say “you currently spend X doing Y, and after implementing this, you won’t.” That requires specificity. It requires understanding the buyer’s actual operations. And it requires being willing to be held to the claim.

So vendors stay vague. And CX practitioners stay confused.

There’s a second problem here: disruption anxiety. For a small team running CX for a large organisation, adopting a new AI capability isn’t just a technical decision. It’s an organisational one. The concern isn’t usually “does this AI work?” It’s “how do I implement this without creating chaos for my team, and what does my workload look like six months in?”

Most AI pitches ignore this completely.

The Question Every AI Pitch Should Answer

There’s a simple test you can apply to any AI vendor claim.

Ask them to finish this sentence: “After implementing this, your team will no longer have to…”

If they can answer that concretely, with specifics about your team size, your workflow, and your customer base, you’re in a useful conversation. If they pivot to platform capabilities, integration options, or the size of their training dataset, ask them to rewrite the pitch and come back to you.

I’ll say it again: I’m not just calling out our competitors here. Benefit-led messaging is genuinely hard to do well, and most vendors (including, at times, vendors who should know better) default to features because features are easier to describe and demonstrate. We should hold ourselves to the same standard we’re asking others to meet.

What Benefit-Led AI Actually Looks Like

Here’s an example of what the difference looks like in practice.

In our experience with housing CX teams, feedback volumes make manual review impossible at scale.

A small team running feedback across a large housing association — hundreds or thousands of comments per month — physically cannot read every response. The common version of this problem is simply that things get missed. Scores get tracked. The comments don’t get read.

But buried in those comments are tenants with mould problems compounded by health conditions. Tenants dealing with bereavement on top of a repair that hasn’t been fixed. Tenants who are struggling to communicate what they need. The signals are there. Nobody has time to find them.

This is exactly the kind of problem that well-implemented AI can solve — and it’s possible to be specific about how:

CustomerSure analyses free-text feedback comments to detect vulnerability signals in a way that’s aligned with the FCA and the Regulator of Social Housing. When those signals are present, the relevant feedback gets labelled automatically and the right person gets an alert.

The team doesn’t read 1,000 comments looking for the three that need immediate action. The system finds those three and tells the right person.

That’s not “AI-powered insights.” That’s: “You used to miss this, causing problems for tenants and increasing your regulatory risk. Now you don’t.”

The decision about what to do about the vulnerable tenant with the unresolved mould issue stays entirely with a human. The AI removes the needle-in-haystack problem.

This is an important distinction, and it’s one that honest AI vendors make clearly. (We’ll not mention vendors promising to replace your workforce with autonomous AI ‘agents’ for now.)

For what it’s worth: the FCA cited this approach in their March 2025 Consumer Duty review as an example of good practice in vulnerable customer treatment. External validation tends to be more convincing than self-certification.

Acknowledging What AI Doesn’t Do Is Part of the Value

Every AI pitch implies transformation. Most implementations are more modest, which is fine, because modest and genuinely useful is worth a lot more than transformative and vague.

The value of AI in a CX context isn’t replacing human judgement. It’s removing the manual labour that prevents humans from applying their judgement in the first place.

Being clear what stays in human hands also answers the disruption anxiety directly. “Here’s what the system does automatically. Here’s what you still decide. Here’s what your team’s work looks like six months from now.” That’s the information a small CX team needs before they can say yes to something new — and it’s the information that’s almost never included in an AI pitch.

The vendors who build trust in an AI-saturated market won’t be the ones with the most impressive capability claims. They’ll be the ones who can say, specifically: “After this, you’ll stop doing X. You’ll still do Y. And here’s exactly what changes.”

A Better Standard for AI Claims

You have the power to demand specificity.

The best AI in customer experience does something simple and valuable: it finds the signal in the noise so that your team — however small, however stretched — can spend its time on the decisions that actually require a human. That’s not transformative. It’s just useful. And useful, right now, is exactly what the market is missing.

We’re working on improving how we communicate this, but for now, if you want to understand the mechanics of how AI is used in CustomerSure, including where it applies and what controls you have over it, that’s documented in our help centre.

See what benefit-led AI looks like in practice

We can show you exactly how CustomerSure identifies vulnerable customers from free-text feedback — and what your team's workflow looks like before and after. No deck required.

Book a demo

Chris Stainthorpe

Chris is a member of CustomerSure’s founding team. Since 2010, he has worked with our clients to help them put VoC programmes in place that respect the customer and deliver measurable results. The best part of his job is learning something new from every new client.
