In a recent sprint review we were asked how our findings, which were based on a relatively small number of customer conversations, could be meaningful. Were they statistically significant?
I’d grown used to our stakeholders being familiar with the background to qualitative research: that we don’t try to quantify it as such, and that the selection approach and recruitment matrix mean we can have confidence in the insights. However, staff had come and gone, so it was a good reminder to address the common concern that a survey would have been better, and more statistically significant.
To address the concern we touched on the drawbacks of surveys: the difficulty of designing them well, their limited depth of understanding, and the subjectivity of responses. For example: rate our service from 1 to 10, with a follow-up question of why you picked that rating. If a customer answers “efficient service”, what does efficient really mean? Obviously quick and easy, right? Well, maybe not. In more in-depth conversations I’ve asked, “Can you tell me a bit more about what efficient meant to you?” and the answer was not what most of us would expect! A survey can give you breadth but not necessarily depth of understanding. And it’s unlikely to tell you why efficiency is important to customer X.
We also talked about some of the science behind qualitative research: getting your recruitment matrix right, and the concept of saturation, where returns diminish after speaking to a relatively small number of customers. We felt we were close to that point of diminishing returns. Interestingly, we’d already identified and highlighted the key drawback of our selection approach in this situation: we were gaining an understanding of customer experience across a reasonably narrow time frame, very much “point in time” insights.
Still, it is hard for some to trust the insights from what seems like a small number of conversations. My final attempt at persuasion is to ask stakeholders to think about some data we have, e.g. the number of times a customer has logged into a help portal. It’s tempting to say that those logging in most are the ones who need the most help, and that the others are doing great without it. But are they really? Maybe they’re struggling on, tapping a colleague on the shoulder and asking for their help instead. The data (in this case) cannot tell you that. A conversation can.