One thing I love about product management over software engineering is the ability to view projects more holistically. I want to understand not just how we should build something, but why we are building it and who we are building it for. That’s where user research comes into play. I’ve conducted over 30 user research and usability testing sessions across 4 projects during my time in the Microsoft Garage, working with a diverse set of Windows users and cancer researchers, and ultimately publishing a research report with Xbox Research about accessibility in gaming. More recently, I’ve conducted another series of user research sessions to learn more about user needs and expectations for the docs.microsoft.com platform.
I’ve become hyperaware of when I walk out of user research sessions feeling like I’ve lost my sense of direction for a project vs. when I leave having gained clarity, my mind bubbling with ideas. At the end of the day, the former is bound to happen sometimes, and sessions like that can be exactly why we run research in the first place. But I’m hoping to reflect on how I can avoid being the reason it happens.
Here is how I generally approach conducting user research:
Create a user research plan with the goal, hypotheses (optional), research details, etc.
No leading questions
For example, rather than asking “Do you have any problems with x?”, which implies that the user has a poor relationship with x, ask “Tell me about your experience with x” so they get to choose how they feel about it, and then follow up with why.
Tie numbers to questions
Especially for more qualitative studies, tie number scales to questions (e.g. “On a scale of 1 to 5, with 1 being very poor and 5 being excellent, how do you feel about x?”) so you can quantify the sentiment of responses after the interviews. It can be difficult to differentiate a “decently good” from a “fairly well”, but it is easy to tell the difference between a rating of “2/5” and a “4/5”. Of course, scales are relative as well, but they are slightly more empirical.
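As a rough illustration of why the numbers help (the ratings below are made up, not from any real study), a few lines of Python can turn a stack of 1-to-5 responses into comparable summary figures:

```python
# Hypothetical 1-to-5 ratings gathered from eight interview participants
# for the question "On a scale of 1 to 5, how do you feel about x?"
from statistics import mean, median

ratings = [4, 2, 5, 4, 3, 4, 2, 5]

print("mean:", mean(ratings))      # overall sentiment across sessions
print("median:", median(ratings))  # less sensitive to one extreme answer
```

Nothing fancy, but a mean of 3.6 across one round of interviews vs. 2.1 in another is far easier to compare than two piles of “decently good” quotes.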
Let them tell me a story
It is sometimes helpful to leave questions intentionally broad so users have the freedom to answer with anything they want. This can often reveal unidentified flaws in a product, opening my eyes not just to a solution I like, but to the solution we need.
Magic wand question
Very much like the previous point: give users the option to describe their ideal experience with x if they had a magic wand and anything were possible. This helps me understand what the desired user journey could look like, without constraining them to my own biases or questions. Remember, though, that users are experts in the problem space, but not necessarily the solution space.
Create a user interview guide
The user research plan is helpful for outlining everything I hope to get through in an interview, but the truth is, it’s difficult to keep referring to a sheet of paper when I want to foster an organic conversation with someone. Making and reading over an interview guide (a simplified version of the research plan with key points bolded and highlighted) lets me glance at my cheat sheet during the interview and know exactly where I am.
Don’t interrupt them
It can be difficult to hold myself back from steering conversations “back on topic” when I feel I’m not getting the response I’m “hoping for” and a user starts going off on a tangent. But people speak the truth when I let them speak from the heart.
Don’t try to fill in awkward silence
Sometimes user research will feel awkward— and that’s when I generally go rogue and start suggesting answers to users. I fall straight into the trap of asking leading questions, in hopes that they don’t feel bad. But after a terrible interview where I basically spoon-fed a participant all the answers, my mentor gave me the advice of just letting awkward silence happen. This worked surprisingly well. People feel obligated to come up with answers (and often even more innovative ones) because of the uncomfortable dead air.
I find it helpful to conduct interviews with another person so that one can ask questions, while the other focuses on note taking. This time around, we did our user research through usertesting.com which meant that I didn’t have to take notes at all, and could review the recorded interviews after each one. It also gives the flexibility of annotating the video to mark down key points.
After every interview, I create a summary report covering (loosely, depending on context) the following points:
Anonymous user ID (remove PII for confidentiality)
What they currently do
What problems they have
What their wants are
Other interesting observations
User research report
After all of my user research sessions, I sit down, draw themes across my observations, and generate recommendations and actionable items from my findings. This includes revisiting my hypotheses.
It’s important to reflect on and share my learnings so that there is transparency in the work I’m doing and the team is aware of the concerns that users have. This facilitates an environment where teammates share and learn from each other’s work.
Don’t let user research be the be-all and end-all
Anomalies in user research findings are inevitable, and that’s okay. They don’t mean the research should be scrapped, or massaged to prove the initial hypotheses. Rather, the holistic result of the research can help drive part of the decision making in creating a solution that users will love, but it should be coupled with data and metrics to ensure that the solution doesn’t just satisfy the needs of the small sample of users who were interviewed.
Huge thank you to Sara Lerner, Den Delimarschi, Horyun Song and Melissa Boone for guiding me through all of this. I still have lots to learn. 💖