From what I recall from my UX stats course, you're not supposed to tell respondents that anything below 7 is bad. So surveys that spell out what the scores mean are biasing their results (which would be trash anyway, since NPS has so many other issues).
Of course, if you used a weird measuring system like that, you'd explain it to consumers so they'd know a 5 actually means "terrible" and not "average," right? NOPE, we just throw it out there. Stupid.
When I worked at a FAANG company in Europe, this principle applied to our internal evaluations in really interesting ways: since the company was based in the US, it was assumed that anything less than 5/5 on a team's internal "how's the manager doing?" report was a failing grade.
But my team was ~half Eastern Europeans, who (broadly) had a cultural norm of responding 2/5 or 3/5 when everything was absolutely fine. And of course, the scores weren't defined, and the way they'd be evaluated wasn't plainly spelled out anywhere. The result was total confusion.
Comments
I've gotten prompted for NPS from tools where use of the tool *at all* is compulsory and for which I am not the customer or even the primary user.
On principle, my default is to give a 6.
It does what it's intended to do, regardless of whether that is good or not.
If you have an NPS of 0, it's neutral, which is probably bad if you want to grow quickly with the help of your existing users.
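To make the "6 is a detractor" and "NPS of 0 is neutral" points concrete, here's a minimal sketch of the standard NPS calculation (the function name and example score lists are mine, but the bucketing rule, 0–6 detractors / 7–8 passives / 9–10 promoters, is the standard definition):

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6).

    Passives (7-8) count toward the total but toward neither bucket,
    so the result ranges from -100 to +100.
    """
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Equal numbers of promoters and detractors cancel out to a "neutral" 0:
print(nps([10, 9, 6, 0]))  # -> 0.0

# A wall of "it does what it's intended to do" 6s is maximally negative:
print(nps([6, 6, 6, 6]))   # -> -100.0
```

Note how a 6, which a respondent might mean as "fine, slightly above average," counts exactly the same as a 0.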