> I've seen NPS used as a way of keeping a pulse on a community; if it drops sharply, something is clearly wrong in a way that normal monitoring can't surface.
If this is your goal (and it's a good goal), there are way better questions than NPS to use here. I'd go with a simple "How did we do today?" question, versus the convoluted NPS mechanism.
Yeah, probably. It's the sort of thing where someone's going to want to know your NPS anyways, so if you're collecting that data you may as well break it apart a little. And by no means do I think you shouldn't be doing other sorts of user research.
> Growth is a single number, and NPS is measuring growth, not UX.
Connect the dots for me on how NPS measures growth. Where does it tie to growth at all?
> NPS is trying to measure your customer birth rate by asking how many customers are (or intend to be) pregnant.
Horrible analogy, but ok. I'd say, if there's any equivalent, it's asking how many people think they are likely to ever get pregnant.
> What the people who designed NPS did, I am sure (meaning I'm speculating, but giving the strongest possible interpretation), is measure some responses and compare them to the number of actual referrals, then drew the lines where the referral rates cross from negative growth to neutral growth, and from neutral growth to positive growth.
They didn't do anything like that.
> And it seems plausible that people who give a score of 6 or less won't end up referring anyone, on average.
It does seem plausible. It isn't validated by any science, but it's certainly plausible. (Like the earth is plausibly flat.)
> Since NPS is an indirect growth metric, the better answer may be to simply measure your growth directly.
Now I'm really confused by your statements. I just read the link to the original source that you posted on hbr.org. What Reichheld described is exactly what I said above, he correlated survey responses against actual growth rates, and drew the lines between negative and positive growth rates. Not only that, he asked the question multiple different ways, and found out which question statistically landed the most accurate answers.
Why are you claiming they didn't do that? Are you saying the article is lying about the data they used to come up with NPS?
I'm not defending NPS. But your first and biggest argument in the article is unscientific and anti-statistical. You're making an emotional case that it looks weird because there are thresholds. You said "For some reason, NPS thinks that a 6 should be equal to a 0." and "Make that data set to be all nines: 9, 9, 9, 9, 9, 9, 9, 9, 9, and 9. The average is 9. And miraculously, NPS is 100!" Your reasoning here is faulty. You threw in sarcastic, irrelevant comments about bonuses to make getting your NPS score wrong feel like it'll do damage.
Instead of investigating the possibly legitimate reason NPS people might be doing this, you put up a straw man argument about all respondents giving the same score. The likelihood of every respondent in a large survey giving an 8 is very, very close to zero. The likelihood of your NPS score suddenly flipping from 0 to 100 is very, very close to zero.
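For reference, the threshold behaviour being argued about is just the standard NPS arithmetic (promoters are 9-10, detractors are 0-6, everything else is passive). A minimal sketch, using the article's all-sixes and all-nines data sets:

```python
def nps(scores):
    """Net Promoter Score: % of promoters (9-10) minus % of detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

all_sixes = [6] * 10   # the "everyone moved from 0 to 6" data set
all_nines = [9] * 10   # the "all nines" data set
print(nps(all_sixes))  # -100.0: every 6 still counts as a detractor
print(nps(all_nines))  #  100.0: every 9 counts as a promoter
```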
So you got my analogy and suggested an alternative, but you still don't see how probability of referral (or birth) is an indicator of business (or population) growth? You do seem to get it, so I don't understand what you're missing. I'm not sure how to (or if I need to) explain it better.
Polling a bunch of people how likely they are to refer a friend is like sampling the derivative of the growth function you want to estimate. If everyone responds accurately and tells the truth, and they refer people at the rate they said they would, you can use the data to predict your growth.
The fact that NPS puts the negative growth line at 60% says, to me, that they concluded that people inflate their self-reported referral probabilities.
There is a mapping between what people report, and what they do. NPS might have the mapping wrong, but there is a mapping. I don't expect the NPS mapping to be very accurate, but if it's wrong I'd like to hear why. You haven't explained why it's wrong because you don't seem to understand why it might be right.
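To make the "mapping" idea concrete, here is a rough sketch; the score-to-referral numbers and the churn rate are invented for illustration and are not Reichheld's actual calibration:

```python
# Hypothetical mapping from a 0-10 "likelihood to recommend" answer to the
# number of referrals that respondent actually produces per period.
# These values are illustrative assumptions, not published NPS data.
expected_referrals = {s: 0.0 for s in range(7)}  # 0-6: effectively no referrals
expected_referrals.update({7: 0.3, 8: 0.5, 9: 1.2, 10: 2.0})

def implied_referral_rate(scores):
    """Average referrals per existing customer implied by survey answers."""
    return sum(expected_referrals[s] for s in scores) / len(scores)

churn_rate = 0.25  # assumed fraction of customers lost per period
survey = [6, 7, 9, 10, 4, 8, 9, 6, 7, 10]
growth = implied_referral_rate(survey) - churn_rate
print(f"implied growth: {growth:+.2f} new customers per existing customer")
```

If the mapping is calibrated against real referral behavior, the survey becomes a usable leading indicator of growth; if it isn't, the scores are just numbers.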
> Horrible analogy, but ok. I'd say, if there's any equivalent, it's asking how many people think they are likely to ever get pregnant.
I don't understand what you're arguing (or why); you're splitting a very fine hair here. The difference between what you suggested and what I said is subtle at best. The NPS question is how likely you are to recommend this service to a friend. Someone with a low probability is likely to recommend to 0 friends. Someone with a (self-reported) medium probability may be likely to refer 1 friend. Someone with a high probability may be likely to recommend 5 friends.
Asking a yes/no question about whether one is ever likely to get pregnant would be a worse proxy for population growth than asking how many pregnancies one expects in a lifetime. The NPS question doesn't exactly ask either of those; it can be interpreted either way.
> They didn't do anything like that.
So what did they do? Your post ignores that question and argues it's purely bogus. I don't even know what they did, and I don't buy that NPS is pure fiction with nothing at all to back it. I totally would buy that the NPS scale was based on a small sample, and that it doesn't fit many companies very well.
> Like the earth is plausibly flat.
Not sure I get where the snark here is coming from. There exists some response value between 1 and 10 below which, statistically, people will not refer anyone. What is that number? Why does 6 seem as plausible to you as the earth being flat?
Well, let's let the author try to convince you that NPS is a harmful, horrible number to summarize a company's performance on.
He would tell you that NPS is only like earnings or revenues if we allowed either to have 50% or more of their data filled with arbitrary numbers rather than audited data collected by state-licensed specialists who would lose their jobs if it was discovered the data was manufactured out of whole cloth.
The author would also tell you that NPS is easily gamed and there's no check on whether that is happening. He wrote extensively in the article about the various techniques folks can use to game the numbers. If this is a number reported to shareholders, shareholders should insist (No, Demand!) that the numbers be corroborated by a neutral third party that will accept liability for any errors. (No surety insurer will guarantee such a liability, for the risk of error or misrepresentation is way too high.)
As you stated, most use follow-up questions to get a richer understanding of the customer. What the author would tell you is that it's clear the NPS recommendation question taints those followup questions and diminishes their validity and inherent value. If the true goal is to learn a richer understanding of customer experience, there are many better ways to achieve it.
In other words, the author believes if executives want a simple metric that is better than NPS, a random number generator is the fastest and cheapest way to achieve it. Why bother with customers at all, if all you're going to do is squander your interaction with them on such a foolish metric?
>if executives want a simple metric that is better than NPS, a random number generator is the fastest and cheapest way to achieve it
>In fact, NPS measures nothing in particular.
These types of sweeping statements aren't a helpful way for the author to advocate his broader point.
The strengths of the article lie in the more detailed points, which bring to light some great examples of how NPS, and surveys in general, are misused (and some examples that don't actually manifest as real problems in daily use very often).
My thoughts on the examples chosen above:
>50% or more of their data filled with arbitrary numbers
I'm not sure what this refers to, but the way the NPS equation works, every respondent's score matters and mathematically impacts the overall NPS (each one feeds in as either a promoter, a neutral, or a detractor). Some of the author's own recommended questions also allow only three possible answers.
>Not collected from state-licensed specialists
Almost all operational data collected by companies for management reporting is not collected by state-licensed specialists, but is still useful.
>Easily gamed
All survey questions could be gamed in ways similar to the article's examples. A good executive will make sure the survey is asked in the same way of his own organization as of peers', and in the same manner over time. He or she won't let agents do things like cherry-pick which customers to survey; otherwise the money invested in the survey won't actually help him run his company.
>The NPS recommendation question taints those followup questions
All survey questions can be tainted by preceding questions. When writing a survey it is fairly straightforward to A/B test the order to make sure this isn't a major factor.
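As a sketch of that kind of check (the responses below are random placeholders; in practice you'd use the two real respondent pools):

```python
import random

def nps(scores):
    """Net Promoter Score: % of promoters (9-10) minus % of detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Placeholder data: group A saw the recommendation question first,
# group B saw it after the follow-up questions.
group_a = [random.randint(0, 10) for _ in range(500)]
group_b = [random.randint(0, 10) for _ in range(500)]

# If the gap is large relative to sampling noise, question order is
# tainting the answers and the survey should be restructured.
print(f"NPS, question asked first: {nps(group_a):+.1f}")
print(f"NPS, question asked last:  {nps(group_b):+.1f}")
```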
> For some reason, NPS thinks that a 6 should be equal to a 0. Nobody else thinks this. Remember, if you worked at a company like Intuit, all that hard work to get everyone to move from a 0 to a 6 would not be rewarded. Your executive would not get their bonus. It’s as if you didn’t do anything.
This seems perfectly reasonable to me. Outcomes matter -- not effort -- and reaching 6 is not the outcome NPS wants.
Separately, the distribution will never be that narrow in practice. Once the highest raters reach 7, NPS will start improving, because they stop counting as detractors. The author even states herself that the input has noise, so the "everyone's a 6" argument is a straw man.
> let's let the author try to convince you that NPS is a harmful, horrible number to summarize a company's performance on.
None of your arguments here are based on data. Do you have some evidence that a measured NPS score proved that the metric is bad? The WP link you posted to criticisms is all arguing relative merits. None of them are particularly strongly opposed, and none claimed that NPS doesn't work.
> The author would also tell you that NPS is easily gamed
Do you have data showing NPS scores being gamed?
Easily gamed and actually gamed are two completely different things. Having tried to measure NPS before, I found that zero people appeared to be gaming the system; my customers told me honestly that my product was mediocre.
To suspect that the polls are being gamed, you assume there's something in it for the respondent, right? What benefit do you think there is for respondents to answer dishonestly?
> In other words, the author believes if executives want a simple metric that is better than NPS, a random number generator is the fastest and cheapest way to achieve it.
I hate to say it, but this kind of hyperbolic statement is having the opposite of the intended effect; it reflects poorly on the author.
> Well, let's let the author try to convince you that NPS is a harmful, horrible number to summarize a company's performance on.
This is not how it's used in practice. Meaning: No company measures performance based solely on an NPS metric. NPS is one data point among many used to measure company performance.
There's no more sense in demonizing NPS than there is in worshipping it.
It wasn't simple on the back-end. Your estimate of "man-weeks" is off by an order of magnitude, because of the back-end system complexity.
As the article states, it started with the button, which, as you quite rightly point out, dominoed into a lot of changes and thinking about edge cases that didn't exist before. The point I was trying to make was that it started with the button.
The big story here isn't that adding this particular button will yield $300m in revenue. The story was that, by watching users, we saw an opportunity to reap $300m. And we took it and it worked.
The original draft had more detail. As did the backstory piece we wrote. Editors cut it down for page count. (It was originally a foreword to a book on web form design, which was all about the buttons and the fields.)
The important thing here isn't that a company implemented guest checkout. It's that when they did it, because we could see the problem in our research, they found $300m in revenue. Guest checkout won't work for everyone, but doing research like this likely will.
If it's a retainer, then they pay you up front for a certain number of hours you'll be available. This can be at a higher fee than the other work.
Another alternative is to just have an on-demand hourly rate that is higher than your normal rate. You could have pre-scheduled "office hours" at the regular rate, but if they want you at other times, they get to decide to pay you a little more.
There goes the Dow Jones average.