CustomerSure supports three standard customer experience metrics:

- Net Promoter Score (NPS)
- Customer Satisfaction (CSAT)
- Customer Effort Score (CES)

These metrics are derived from customer feedback scores. They summarise sentiment at a high level and make it easier to track trends and compare performance — but they do not explain why customers feel the way they do.
This page explains how metrics behave in CustomerSure, how to interpret them correctly, and how they fit into the wider reporting model.
If you need a broader overview, read Reporting basics first.
Metrics in CustomerSure are:
Metrics are not:
Metrics tell you, overall, how things are going. Topics and comments tell you why.
Metrics are calculated automatically from rating questions that are marked as measuring NPS, CSAT or CES.
NPS is calculated using the standard formula:
NPS = % Promoters − % Detractors

Where, on a 0–10 scale:

- Promoters score 9–10
- Passives score 7–8
- Detractors score 0–6
NPS always results in a score between −100 and +100.
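For illustration only (this is not CustomerSure's internal code), here is a minimal sketch of that calculation in Python, assuming a simple list of 0–10 ratings:

```python
def nps(scores):
    """Net Promoter Score from a list of 0-10 ratings."""
    promoters = sum(1 for s in scores if s >= 9)   # 9-10
    detractors = sum(1 for s in scores if s <= 6)  # 0-6
    return 100 * (promoters - detractors) / len(scores)

# 5 promoters, 3 passives, 2 detractors out of 10 responses:
# 50% − 20% = +30
print(nps([10, 9, 9, 10, 9, 7, 8, 7, 3, 6]))  # 30.0
```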
CSAT is calculated as:
(% of positive scores)
What counts as “positive” depends on the scale used (for example, 4–5 on a 1–5 scale).
CSAT is intentionally flexible. That flexibility makes it easy to use, but it also makes benchmarking meaningless: what counts as “positive” can differ from one organisation, or one scale, to the next.
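A comparable sketch for CSAT, assuming a 1–5 scale where 4 and 5 count as positive (the threshold is the part you would adjust for other scales):

```python
def csat(scores, positive_from=4):
    """CSAT as the percentage of scores at or above the 'positive' threshold."""
    positive = sum(1 for s in scores if s >= positive_from)
    return 100 * positive / len(scores)

# 7 of 10 responses score 4 or 5 on a 1-5 scale -> 70%
print(csat([5, 4, 4, 5, 3, 5, 2, 4, 5, 1]))  # 70.0
```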
CES measures how easy or difficult an interaction was for the customer.
CustomerSure calculates CES as the percentage of low-effort responses, based on the effort scale used.
CES is best suited to transactional feedback, where the goal is to reduce friction and cost-to-serve.
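And a sketch for CES, assuming a 1–7 scale where 6–7 mean the interaction felt easy; as with CSAT, the “low effort” threshold depends on the scale you use:

```python
def ces(scores, easy_from=6):
    """CES as the percentage of low-effort responses.

    Assumes a 1-7 scale where higher means easier; 6-7 count as low effort.
    """
    easy = sum(1 for s in scores if s >= easy_from)
    return 100 * easy / len(scores)

# 6 of 8 responses score 6 or 7 -> 75%
print(ces([7, 6, 7, 5, 6, 7, 2, 6]))  # 75.0
```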
You can measure the same metric across multiple surveys, journeys and touchpoints.
When you do:
You cannot measure the same metric more than once on a single survey.
Metrics can be sliced and compared by:
Slicing answers questions like:
See Segments for guidance on choosing the right slices.
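If it helps to picture what slicing means in practice, here is a hypothetical sketch (segment names and columns invented) that computes one NPS figure per segment:

```python
import pandas as pd

responses = pd.DataFrame({
    "segment": ["North", "North", "North", "South", "South", "South"],
    "score":   [10, 9, 6, 7, 3, 2],  # answers to a 0-10 NPS question
})

def nps(scores):
    promoters = (scores >= 9).sum()
    detractors = (scores <= 6).sum()
    return 100 * (promoters - detractors) / len(scores)

# One NPS figure per slice:
# North:  2 promoters, 1 detractor of 3  -> +33.3
# South:  0 promoters, 2 detractors of 3 -> -66.7
print(responses.groupby("segment")["score"].apply(nps))
```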
High-level metrics like NPS, CSAT and CES are useful indicators — but on their own, they don’t explain why scores are high or low.
The Driver report helps answer that question.
It does this by correlating these top-line metrics with topic sentiment, to show which topics are most strongly associated with changes in overall scores.
It helps you understand:
The Driver report looks at the relationship between:

- the top-line score a customer gives (NPS, CSAT or CES)
- the sentiment of the topics mentioned in their comments
It then shows which topics are most strongly associated with higher or lower scores.
This allows you to move from:
“Our NPS has dropped”
to:
“Negative sentiment about call waiting time is strongly associated with lower NPS scores. Let’s look at the customer comments to see how we fix that.”
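To make the idea concrete, here is a rough, hypothetical sketch of that kind of correlation (invented data and topic names, not the report's actual algorithm): each response has a score plus flags for negative sentiment about a topic, and we look at how strongly each flag correlates with the score.

```python
import pandas as pd

# Invented example: a 0-10 score plus flags for whether the comment
# expressed negative sentiment about a given topic.
responses = pd.DataFrame({
    "score":                  [10, 9, 3, 2, 8, 4, 9, 1],
    "neg_waiting_time":       [0, 0, 1, 1, 0, 1, 0, 1],
    "neg_staff_friendliness": [0, 0, 0, 1, 0, 0, 0, 0],
})

# A strongly negative correlation suggests the topic is associated with
# lower scores; in this toy data, waiting time stands out.
for topic in ["neg_waiting_time", "neg_staff_friendliness"]:
    print(topic, round(responses[topic].corr(responses["score"]), 2))
```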
The Driver report shows correlation, not proof of causation.
That means:
The Driver report is most powerful when used as part of a loop:
Metrics show that something is happening. Drivers help you understand where to act.
Metrics respond to behaviour — they are not behaviour themselves.
Setting targets like “NPS must be +60” encourages teams to chase the number rather than the experience behind it. If you target (or worse, incentivise) a high-level metric, it’s likely your teams will hit it, but it’s less likely that customer satisfaction will improve in a way which impacts your bottom line.
Small changes are often noise. If you, as a CX leader, aren’t sure why a score is moving the way it is, always sanity-check:
Used well, metrics are:
Used badly, they become:
Metrics are the start of the conversation, not the conclusion.