We’ll look at three of the most popular satisfaction metrics – Net Promoter Score (NPS), CSAT and CES – and explain which to use in which situation.
More importantly, we’ll look at why the metric you pick doesn’t actually matter, and give you three concrete things you can improve today which will lead to a measurable increase in customer satisfaction.
Let’s get one thing straight before we start:
A wise farmer once said,
“You don’t fatten a pig by weighing it.” – 🐖
Measuring customer satisfaction is not the same thing as improving satisfaction.
This is the single biggest missed opportunity in customer service improvement. Worldwide, in every industry, firms realise that satisfaction is important, so they try to measure it by gathering customer feedback.
But they gather data about satisfaction in a way which does nothing to improve satisfaction. Or worse, a way that decreases satisfaction.
You wouldn’t measure the rooms in your house with a tape measure that put holes in the wall. You wouldn’t check the temperature of a glass of water by pouring mercury into it and hoping for the best. Yet time and time again, companies try to measure satisfaction in a way which annoys their customers.
Pro tip: That doesn’t end well.
Which is why, if we may be so blunt, if you haven’t got the basics right, it genuinely does not matter which metric you pick. NPS, CSAT, CES, … or something exciting and home-grown.
They’re all decent measures, and we’ll walk you through the pros and cons of each, but if your ultimate goal is happier customers (that is your ultimate goal, right?), none of them will make your customers happier.
So work out how you’re going to do that first.
There’s one good reason to measure satisfaction – it makes it easier to talk about.
Once you’re measuring satisfaction accurately and fairly, you can…
So, it’s totally fine to measure satisfaction.
What isn’t fine, is trying to measure it before you’ve got a concrete plan to improve it.
We’ve decades of experience in improving customer service.
We’ve seen methodologies, metrics, and fads come and go – ISO 9000, Six Sigma, NPS, Social Customer Service, Agile, Chatbots (remember them?), and now AI – all promising to totally revolutionise customer service.
But just think about your own experience…
…is customer service in general better than it was 10 years ago? Or 20? If any of these things were the ‘silver bullet’ that actually delivered perfect customer service, surely customer service would be great by now?
All of these methods are fine, but they’re just tools to get the job done, not magic that can save a company that’s got the fundamentals wrong.
We want you to be successful – with happier customers and a happier team. So, before we break down which metric to choose, we’ll look at the three fundamental things you need to get right first: Culture, Strategy and Execution. Get them wrong, and you’ll struggle to improve satisfaction – no metric can save you.
Then, we’ll compare the three most popular satisfaction metrics, and help you work out which is best for you.
A powerful thing happens when you ask a customer for feedback.
Your customer will start to dare to believe that your company cares.
Not whether you care – you wouldn’t be doing it otherwise – but whether there is a goal shared by everyone in your company to deliver great service.
You cannot deliver great service by yourself; it requires everyone in every role to commit. And unless your company culture expects that everyone will deliver great service, customers won’t experience great service.
What’s rewarded in your company? What values are upheld consistently? What behaviours are modelled? What do people get praised for?
Until every leader and every employee shares a vision of, and commitment to, delivering great service, there is no metric, technique or silver bullet that will improve customer service.
You need to fix culture first. Paying lip service isn’t enough; it has to be ingrained in the DNA of the company that better service leads to better financial performance.
This requires belief and attitude, not rules, scripts and tick boxes.
If you’re fortunate and you’re in a company where that’s all in good shape, then read on.
Great, so everyone in your company 100% understands that happier customers lead to a happier bottom line, and that it’s everyone’s job to make them happy.
What are you going to do about it?
We’re sure the budget airline who send us a 57-page customer survey every single time we travel with them genuinely do want happier customers. And the hotel chain who send us 5 nagging survey reminders every time we use one of their conference rooms are probably keen on the idea of customer satisfaction.
It just feels like… Perhaps they haven’t thought things through.
Culture is great, but unless you have a crystal-clear plan for how you’re going to use customer feedback in your business, it’s better not to bother asking.
Who’s going to feel good about your brand after wading through 57 pages of questions only of interest to your marketing team?
Are nagging ‘reminder emails’ a frontier-pushing customer experience? Or just a way of artificially inflating response rate?
And even if customers do eventually get ground down by your reminders, and slog through your mammoth, boring survey, what are you going to do with the responses?
Because if you’re just hoping to use them in a process improvement project one day, then all the customers who have left you feedback begging you to solve their problems now will have switched to a competitor long before you get round to tackling the underlying issues.
Staying with that final point, what does each customer reasonably expect to happen after they have given feedback?
The answer? Customers might want you to respond.
They might not want to be put in a spreadsheet and drawn on a graph. They might have a problem which they need you to fix, and quickly.
If you’re committed to service, you need to be prepared to fix those problems.
Not in “up to 5 business days”. Seriously? If I walked into a bricks-and-mortar store and asked a question, would it be OK to make me stand waiting 5 days for an answer?
Of course not – so it’s definitely not OK to make someone who’s taken time out of their busy day to give you feedback wait that long either.
It’s totally fine to collect feedback and use the results for process improvement, but if you don’t want to decrease customer satisfaction by simply asking for feedback you need to be set up to deal with the responses as they arrive.
(Pro tip: This is one of the reasons why annual surveys are a bad idea)
Ok, now you’ve sorted the essentials:
…You’ve guaranteed that your effort to measure satisfaction isn’t going to decrease satisfaction.
It’s time to pick a metric!
We’ll look at the pros and cons of three of the most popular metrics – NPS, CSAT and CES.
Respondents are then characterised as promoters (scoring 9–10), passives (7–8) or detractors (0–6) according to the score they give you.
NPS is your percentage of promoters minus your percentage of detractors, giving a score from -100 to +100. A positive score is taken as a sign that your customer base is likely to grow; a negative one, that it’s likely to decline.
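As a minimal sketch of that calculation (assuming the standard 0–10 ‘how likely are you to recommend us?’ responses, with the usual promoter/passive/detractor bands):

```python
def nps(scores):
    """Net Promoter Score from 0-10 'likelihood to recommend' responses.

    Promoters score 9-10, detractors 0-6 (passives, 7-8, are ignored).
    Returns % promoters minus % detractors: a value from -100 to +100.
    """
    if not scores:
        raise ValueError("need at least one response")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Example: 5 promoters, 3 passives, 2 detractors out of 10 responses
print(nps([10, 9, 9, 10, 9, 8, 7, 7, 3, 6]))  # → 30.0
```

Note that passives still count in the denominator, so they drag the score towards zero without adding to either side.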
NPS has a few benefits:
In our experience, if you do NPS correctly, it has few drawbacks. The only significant one is that the ‘recommend’ question isn’t appropriate for all circumstances (but even then, NPS allows you to ask another question if you really, really want to!)
Trouble is, a lot of people don’t do NPS well, and so introduce a few problems. Here’s some common ones we see:
There’s no one fixed, ‘official’ definition of what CSAT is.
Everyone agrees it’s basically “ask your customers if they’re satisfied”, but after that, all bets are off.
You can use any scale, ask the question in any way, and do anything you like with the results, and still call it CSAT.
In our view, that’s not a massive problem.
As long as you’re doing all the other bits of your feedback process right (asking customer-centric questions at the right time, and following up on the responses), it’s fine to ask customers how satisfied they are.
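Since there’s no official definition, one common convention is to ask how satisfied the customer was on a 1–5 scale and report the percentage of responses in the top two boxes (4 ‘satisfied’ and 5 ‘very satisfied’). A sketch, assuming that convention:

```python
def csat(scores, scale_max=5):
    """CSAT as the percentage of 'satisfied' responses.

    One common convention: on a 1-5 scale, count the top two boxes
    (4 and 5) as satisfied. Other scales and cut-offs are equally
    valid -- just be consistent so scores are comparable over time.
    """
    if not scores:
        raise ValueError("need at least one response")
    satisfied = sum(1 for s in scores if s >= scale_max - 1)
    return 100 * satisfied / len(scores)

print(csat([5, 4, 4, 3, 2, 5, 1, 4]))  # 5 of 8 satisfied → 62.5
```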
Here’s a few pointers to help you get it right:
Problems with CSAT?
Like NPS, a lot of the criticisms of CSAT (see this article for a few) can be answered with a shrug and a ‘so what?’
Like NPS, Customer Effort Score was first introduced to the world in the weighty Harvard Business Review.
Its creators, Matthew Dixon, Karen Freeman and Nicholas Toman argue, based on studying over 75,000 customer interactions, that great customer service is very bad at producing loyal customers.
Customers are loyal to great products and powerful brands. Customer service interactions can only do a little to improve loyalty, but can do a lot to destroy it (this probably feels familiar if you’ve ever been on the receiving end of bad customer service!)
So, they argue, if you’re trying to ‘measure’ customer service, it makes no sense to try and measure the ‘best case scenario’ – it doesn’t matter if some customers are delighted if the majority are frustrated. Instead, the metric they propose is more of a ‘worst case scenario’:
“How much effort did you personally have to put forth to handle your request?”
(Measured on a scale from 1 (very low effort) to 5 (very high effort)).
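Scoring CES is then simply a matter of averaging those responses – unlike NPS and CSAT, a lower number is better. A minimal sketch, assuming the original 1–5 scale described above:

```python
def ces(scores):
    """Mean Customer Effort Score on the original scale of
    1 (very low effort) to 5 (very high effort). Lower is better."""
    if not scores:
        raise ValueError("need at least one response")
    return sum(scores) / len(scores)

print(ces([1, 2, 1, 4, 2]))  # → 2.0
```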
CES’ creators claim that it has greater predictive power of future customer behaviour than NPS (and far greater than CSAT).
We like CES because the question itself is customer-centric (no customer wants to put a lot of effort into sorting out problems caused by an organisation). But remember – as with all metrics, make sure it’s not the only question you ask. (Read our guide to customer survey questions if you need help working out what to ask).
Be very clear about why you’re doing customer feedback.
Metrics and process improvement are important, but you’ll struggle to improve ‘customer satisfaction’ unless you improve the satisfaction of individual customers. One by one – exactly the same way that you won them as customers in the first place.
So the main goal should not be about gathering data. The main goal should be to check that the experience of every individual customer is excellent every time they deal with you, and to fix it straight away if something has gone wrong.
Improving your processes can then be done just as well, and probably better, as a by-product of the data you collect when you’re achieving that more important goal.
The most important thing to remember is that success is governed by principles that are far more important than which customer satisfaction metric you choose.
These factors have much more impact:
Once you’ve got the basics covered, and only then, how should you decide which metric to use?
Whichever one you choose, you’re now equipped for happier customers, a happier boss, and a happier team!
Find out why a low response rate is great; a high response rate can be dreadful, and what response rate you can expect from your own surveys.
This is almost always a bad idea. Learn the businesses it does work for, and what you can do instead if you aren’t one of them.