Differences between NPS and CSAT? Who cares. It’s all about events and relationships.
In his last post Stephen Hampshire discussed the pros and cons of Net Promoter Score and longer, more 'traditional' customer satisfaction surveys. In this post he argues that it's more important to have a clear purpose when you're surveying customers. Only then can you choose an appropriate surveying method (or methods).
The distinction between NPS and CSAT is a bit of a red herring. A much more important choice is whether to use event-driven surveys, a relationship survey, or both.
This choice can sometimes get bound up with the battle between NPS and CSAT, mostly because NPS surveys tend to be short and new, whereas CSAT surveys tend to be long and old.
Relationship surveys and event-driven surveys
A relationship survey is a snapshot in time of how your entire customer base feels about you. It answers the question "How satisfied are our customers?"
Event-driven surveys are triggered by the customer going through a particular event, such as signing up, visiting a branch, calling a contact centre or receiving a delivery.
They tell you how good a job you are doing at that touchpoint at that time. These quick event-driven surveys lend themselves to all sorts of new survey techniques (IVR, SMS, etc.).
These are fundamentally very different, but complementary, tools. The loyalty behaviours that benefit your business are driven by the satisfaction of all customers, regardless of how recently they've come into contact with you or what happened when they did.
But this latent satisfaction can take a long time (years, or even decades) to change. Changes to specific events can be made much more quickly, and ultimately it's by changing what you do to customers that you will improve their levels of satisfaction.
Depth and granularity
A related question is: how many customers do we need to talk to?
First we need to discuss the awful word 'granularity'.
It's used in two completely different ways, and you need to get both into your customer measurement process somewhere.
I like to think of it as choosing between going deep or broad—you can't do both at the same time.
You can either get a small amount of information from a lot of people, or a lot of information from a few people.
The former gives you granularity in the sense of a huge amount of data that can be broken down to a very low level within your business.
This is great for coming up with scores for individual members of staff, which in turn is a great way to promote engagement and belief in the survey.
What it doesn't give you is much in the way of insight into what lies behind the score, into the complex mind of the customer.
That kind of depth comes from a longer (and smaller scale) survey. This dovetails nicely with the use of both relationship and event-driven surveys.
What is your survey for?
Do you want to link customer attitudes about you to business outcomes, such as actual loyalty, sales and profit?
Then you need a relationship survey, but you also need enough units of analysis that you can establish the links. This is often harder than it seems.
Do you want to understand what the main drivers of customer attitudes to your business are?
Then you need a relationship survey, and you need a list of potential drivers.
But make sure that you keep the list short enough that the survey itself doesn't make your customers want to kill you, and make sure you capture verbatim comments as well. This will help you to understand what the key moments of truth are in your relationship with customers.
Do you want to performance manage your staff based on customer feedback?
Then you want event-driven surveys. Lots of them.
Start with the events that you know are key moments of truth and survey customers (with a very short survey) just after they've been through the experience.
Get enough responses that you can gather scores for individual agents, stores and so on if you can.
And should you use NPS or CSAT?
Frankly, it makes very little difference.
Either way, you need to understand where to improve and motivate your people to do it. Whichever headline measure works best for you, I'm happy.
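For readers who want the two headline measures pinned down, here is a minimal sketch using the standard definitions (these conventions are general practice, not something specified in this article): NPS is the percentage of promoters (scores of 9-10 on the 0-10 'likelihood to recommend' scale) minus the percentage of detractors (0-6), while CSAT is commonly reported as the percentage of 'top box' responses on a 1-5 satisfaction scale.

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)
    on the usual 0-10 'likelihood to recommend' scale."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

def csat(scores, top_boxes=(4, 5)):
    """One common CSAT convention: % of responses in the top
    boxes of a 1-5 satisfaction scale."""
    return 100.0 * sum(1 for s in scores if s in top_boxes) / len(scores)
```

Both functions boil a set of responses down to a single headline number, which is precisely why neither tells you *why* customers feel the way they do.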
Stephen is development manager for The Leadership Factor. He specialises in the advanced statistical analyses required for modelling information.