Erika Hall speaks so much truth in her post On Surveys:
> If you are treating a survey like a quantitative input, you can only ask questions that the respondents can be relied on to count. You must be honest about the type of data you are able to collect, or don’t bother.
My first role at eBay, years ago, was as a quantitative user researcher[^1]. We ran surveys to measure satisfaction with different areas of the product over time. If that period taught me anything, it’s that surveys are extremely useful when combined with analytics and qualitative user research (triangulation), and pretty useless when looked at in isolation. Survey data alone just doesn’t carry enough context.
[^1]: One of my early experiences at eBay was getting to work one morning and discovering that Peter Merholz had written a scathing blog post about a survey I was running. This was my second month on the job, so I was pretty sure I was going to get fired. The worst part was that he didn’t have the full context, so his criticism wasn’t even valid. We were running a controlled experiment in which each group saw only one of the images in the survey, and the “likelihood to purchase” question was just a decoy to serve as an introduction. We weren’t trying to get absolute numbers on likelihood to purchase (that would be ridiculous); we were comparing responses to different pages to figure out which iconography would work best for ratings (stars, bars, or check marks). Subsequent questions were more specific about the ratings aspect. The whole thing went all the way up to our VP of Product, and my manager had to write an explanation. I was mortified. I still sometimes wake up in the middle of the night in a cold sweat, screaming “survey!!!!!!!!!”