February 12, 2021
After reading a scathing review of NPS, I went on a journey to understand the theories behind surveying customers. When it comes to surveying your customers, there are plenty of confusing acronyms for what you should be using.
The three most popular surveys are Net Promoter Score (NPS), Customer Satisfaction (CSAT), and Customer Effort Score (CES).
It’s never clear when to use each one or what the exact purpose behind each survey is. Worse, when you Google these survey terms, there’s enough SEO content to keep a brigade of content marketers employed.
As I researched this more, I came to the realization that we need to look at why we’re running these surveys in the first place. In my mind, that purpose is to make sure our service is meeting and exceeding our customers’ expectations. Customers use your service because they have a job to be done. Maybe they want a quick way to make presentations that look good to their boss. Maybe they have a workflow that’s a huge headache and your service makes that headache go away.
While a good score is nice, it misses the element of truly seeing where we’re falling short of meeting expectations. That’s because surveying customers is a lag measure, not a leading measure. There’s no button you can press to automatically make NPS go up. If the only incentive is a high NPS score, without a focus on what could improve it, people will game the system while ignoring larger problems hiding behind the good scores.
What I love about all three survey types is when they include a free-form box for the customer to provide feedback. Hearing from customers about the good, the bad, and the ugly is the best way to improve a service or product. Instead of only focusing on a high survey score, look at the low end of the scores. Are there common bugs people keep reporting? Is your support response time too slow? Responding to and acting on that feedback is more valuable than a good score.
To that point, one number derived from a survey isn’t going to tell you everything about your customer base. It’s much like focusing only on sign-up rate: you can’t ignore something like a 20% churn rate. Using a mix of all three surveys and listening to the pulse of your customers will give you the information you’re looking for.
I’d also suggest going out on your own and creating a survey that fits your needs best. As an example, at Postmark we’ve run a “What annoys you about Postmark?” survey as a listening post. We also survey new customers on how they would feel if they could no longer use Postmark, with a free-form box, of course. Being a B2B service, word of mouth is a big driver for us, so learning how new sign-ups think about Postmark helps us make sure we’re meeting our value proposition.
You know your service best, so create surveys tied to the outcomes that matter to you. While you’re looking to improve business goals with a survey, put the focus on improving customer outcomes. After you put in the work to act on feedback, the good scores will follow.
Oh yeah, I did research each of these survey types….
The classic survey. Following an interaction with support, you ask the customer how satisfied they were. e.g., “How would you rate my reply?”, “How would you rate the support you received?”, or “Rate your conversation with Brian.” CSAT gauges whether the issue was resolved and whether the customer was happy. The survey also provides a great place for a customer to vent!
The big knock against CSAT is that it doesn’t take churn or loyalty into account. It’s very possible for someone to have a satisfactory support experience, but that doesn’t mean they’ll return. It’s happened to me many times: the person who helped me was fantastic, but the situation or problem that led to me reaching out was terrible.
And CSAT only casts a net over the people who have reached out to support, which is hopefully only a very small percentage of your customer base.
Introduced in 2003 by a Harvard Business Review article titled The One Number You Need to Grow, NPS surveys have taken over the business world.
The authors of the 2003 article wrote a book to back up their findings, released in 2008; a revised edition titled The Ultimate Question 2.0 came out in 2011. A portion of the book is available on Google Books.
The survey asks a simple question: On a scale of 0 to 10, how likely is it that you would recommend (company/product) to a friend or colleague? It’s common to include a comment box to let people provide additional feedback.
Instead of sending after a support interaction, you send an NPS survey on a schedule, for instance to all active customers every 6 months.
Responses then fall into one of three buckets:
- Folks who gave you a score of 9 or 10 are Promoters.
- People who selected 7 or 8 are Passives.
- And those who returned the survey with a 0 to 6 are Detractors.
To calculate an NPS score, take the percentage of Promoters and subtract the percentage of Detractors. You’re left with a score you can share with leadership or investors, or benchmark against companies in similar industries.
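The bucketing and subtraction above can be sketched in a few lines of Python. The `nps` helper here is my own illustration, not a function from any survey vendor, and it assumes responses are integers on the 0–10 scale:

```python
def nps(scores):
    """Return the Net Promoter Score for a list of 0-10 responses."""
    total = len(scores)
    promoters = sum(1 for s in scores if s >= 9)   # 9-10
    detractors = sum(1 for s in scores if s <= 6)  # 0-6 (7-8 are Passives)
    # % promoters minus % detractors, so the result ranges from -100 to 100
    return round(100 * (promoters - detractors) / total)

# Example: 5 promoters, 3 passives, and 2 detractors out of 10 responses
scores = [10, 9, 9, 10, 9, 7, 8, 7, 3, 5]
print(nps(scores))  # 50% - 20% = 30
```

Note that Passives drop out of the arithmetic entirely; they only dilute the percentages, which is one reason two companies with identical NPS can have very different response distributions.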
But there’s a catch…
The problem is that a 9 means something different from one person to the next. And depending on the day, the same number can mean something different even to the same person.
On top of that, I’ve seen many NPS surveys come back with a 1 and a customer response of “I don’t know anyone to recommend this to.” In these situations, it’s great that the person actually read the question, but the response hurts our overall score.
In a nutshell: the survey is easy to measure, and the score it produces is easy to understand. It’s hard to go a week without running into a company performing one. Take a moment to search your email for “How likely is it that you would recommend”; you’ll find tons of emails!
These surveys hope to negate the negative aspects of NPS by being a predictor of loyalty. A pretty powerful idea, huh? An argument against NPS is that it doesn’t predict future customer behavior; CES hopes to. It’s based on the thesis that the less effort someone has to expend to use a product or service, the more loyal they’ll become.
CES comes from another Harvard Business Review article, this one from 2010, titled Stop Trying to Delight Your Customers. In 2013, Matthew Dixon and CEB (the folks behind CES) came out with the groundbreaking book The Effortless Experience, which introduced CES v2, a rewording of the original survey.
The survey is similar in structure to NPS. On a scale of 1 to 7 (7 being Strongly Agree), it asks: To what extent do you agree or disagree with the following statement: This company made it easy for me to handle my issue. The trailing part of the statement can be modified to fit the situation (e.g., This company made it easy for me to create a project in the app).
Companies that receive an average score of 6 are considered top performers.