What and Why skills vs How skills

John Cutler asked a great question on Twitter: how do we describe less visible skills like qualitative research in comparison to technical skills like software development?

Initially I was intrigued by the parallels with translation vs interpretation in linguistics. I can see similarities between a software developer translating requirements into code, and a design researcher interpreting customer interviews to help produce the right software requirements in the first place.

I also liked a response by Tiffany Chang suggesting that one skill is more concrete and the other more abstract. That resonated with human-centered design approaches for me, and with the idea of not jumping straight from problem to solution:

[Figure: the human-centered design quadrants, mapping abstract vs concrete against understand vs create (ref: https://ssir.org/articles/entry/human_centered_systems_minded_design)]

So in some ways perhaps we are really talking about the difference between skills to understand “what” (we need to do) and “why” (we should do it), versus skills in “how” we do it? And unfortunately it is perhaps natural for many people to see and appreciate “how” skills above “what” and “why” skills. When you’re cold, fire-lighting skills are more immediately appreciable than the skills to uncover that we need to move the tribe to a different valley.

Depth vs breadth

In a recent sprint review we were asked how our findings, which were based on a relatively small number of customer conversations, could be meaningful. Were they statistically significant?

I’d got used to our stakeholders being familiar with the background to qualitative research: that we don’t try to quantify it as such, and that the selection approach / recruitment matrix means we can have confidence in the insights. However, staff had come and gone, so it was a good reminder to address the common concern that a survey would have been better, and more statistically significant.

To address the concern we touched on the drawbacks of surveys: the difficulty of designing them sufficiently well, their limited depth of understanding, and the subjectivity of responses. Example: rate our service from 1-10, with a follow-up question of why you picked that rating. If a customer answers “efficient service”, what does efficient really mean? Obviously quick and easy, right? Well, maybe not. I’ve had cases in more in-depth conversations where I asked “Can you tell me a bit more about what efficient meant to you?” and the answer was not what most of us would expect! A survey can give you breadth but not necessarily depth of understanding. And it’s unlikely to tell you why efficiency is important to customer X.

We also talked about some of the science behind qualitative research: getting your recruitment matrix right, and the concept of saturation, the point at which further conversations stop surfacing new themes, with diminishing returns after speaking to a relatively small number of customers. We felt we were close to that point of diminishing returns. And interestingly, we’d already identified and highlighted the key drawback of our selection approach in this situation: we were gaining understanding of customer experience across a reasonably narrow time frame, very much “point in time” insights.

Still, it is hard for some to trust the insights from what seems like a small number of conversations. My final persuasion is to ask stakeholders to think about some data we already have: for example, the number of times a customer has logged into a help portal. It’s tempting to say that those logging in most are the ones that need the most help, and the others are doing great without it. But are they really? Maybe they’re struggling on, tapping a colleague on the shoulder and asking for their help instead. The data (in this case) cannot tell you that. A conversation can.