Misleading Customer Experience Metrics

by Ian Everdell
http://www.mediative.com

Have you ever wondered whether the data you’re capturing in your feedback survey is actually representative of customer sentiment?

Have you ever filled out a customer or user feedback survey (you know, those ones that pop up when you visit a site and ask if you’d like to leave feedback when you’re done) that you felt didn’t accurately capture your feelings about the site?


Let me give you an example – I just finished taking a test of an iPhone app on usertesting.com. The test was focused specifically on one tiny feature of the app that I thought was totally useless and tangential to the purpose of the app. I had to turn this “feature” on and off and comment on whether I thought it was something that I would use in my day-to-day interaction with the app.

After the test there were some follow-up questions: did I like the feature, would I use it, and a Net Promoter Score (NPS) question. If you’re not familiar with NPS, it consists of a single question:

“How likely are you to recommend this company/service/product/app to a friend or colleague?”

The respondent answers on a scale from 0 (not at all likely) to 10 (extremely likely). Then you subtract the percentage of respondents who answered 0 to 6 (called “detractors”) from the percentage of respondents who answered 9 or 10 (called “promoters”) to get a score between -100 (all detractors) and 100 (all promoters).
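If the arithmetic is easier to see spelled out, here’s a minimal sketch of that calculation in Python. The function name and the example responses are purely illustrative, not from any particular survey tool:

```python
def net_promoter_score(responses):
    """Compute NPS from a list of integer answers on the 0-10 scale."""
    total = len(responses)
    if total == 0:
        raise ValueError("need at least one response")
    promoters = sum(1 for r in responses if r >= 9)   # answered 9 or 10
    detractors = sum(1 for r in responses if r <= 6)  # answered 0 to 6
    # Passives (7 or 8) count toward the total but toward neither group.
    return 100.0 * (promoters - detractors) / total

# Example: my 4/10 lands in the "detractor" bucket.
print(net_promoter_score([4]))            # -100.0
print(net_promoter_score([10, 9, 4, 7]))  # (2 - 1) / 4 * 100 = 25.0
```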

The problem with asking me this question at the end of the user test was that I had spent 10 minutes focusing on one specific, tiny, and (in my opinion) useless feature of the app. The feature had nothing to do with the app’s core functionality or how I would use it from day to day. I had never used the app before, so I had no idea how it would actually perform. Asking me whether I would recommend it was therefore pointless: how could I say, with any confidence, whether or not I would recommend the app based on the few minutes I’d spent commenting on this one useless feature?

Maybe they were trying to find out whether I would recommend the app based on that one feature, but if that was the case, the question should have been worded that way. And, perhaps more importantly, is that really a useful thing to measure – whether I would recommend an app based on that one feature?

So I gave them a 4/10. That seems pretty arbitrary to me. Will they get any useful feedback from the NPS question after that test? Probably not. Will they base business decisions on the NPS from that test? Let’s hope not.


About the Author
Ian is the Manager of User Experience & Research at Mediative, helping clients optimize the online experience for their potential customers. He manages several key accounts and leads the company’s ongoing research projects to better understand search, buyer behaviour, mobile, and user experience. With a background in neuroscience, web design and eye tracking, Ian brings an invaluable knowledge of human behaviour and human-computer interaction to his position at Mediative. He plays a key role in many of the company’s SEO projects and lends his expertise to the strategy development of both PPC and display advertising projects. Ultimately, Ian believes that success in digital marketing comes down to two things: being found and being good. So he works with clients to make sure they tick both boxes and establish a successful and lasting presence online.


