I’m a researcher. That means it’s part of my day-to-day job to understand the differences between the types of research I can draw on and the inherent value they each bring to your application modernization project. But I know that’s not the case for everyone.
Throughout my career, I’ve heard from various coworkers and clients who simply don’t understand the value of different types of research. Sometimes the names of research methods get misused, or multiple terms exist for the same thing, which only adds to the confusion. To cut through the jargon and add some clarity, I wanted to put together a brief explainer that explores some of the work we do for our clients and the research styles we use in the process.
The Difference Between Qualitative and Quantitative Research Methods
Most people, I believe, understand the basic difference between qualitative and quantitative research. Quantitative methods capture things that can be measured: survey feedback, card sorts, website analytics and the like. Qualitative feedback, on the other hand, is not measurable in terms of numbers—often written or spoken, the feedback is based on thoughts, opinions and feelings that are difficult to quantify. It comes down to numbers versus words, essentially.
What I think is confusing for some people is the purpose of each research method and how they play together. People like and understand numbers—they’re faster to decipher and easier to compare—so they might naturally lean towards quantitative methods like surveys or analytics. And those definitely have value in certain applications.
But quantitative methods will never answer why something is happening. If your analytics are showing that 50% of your users are dropping off at Step 2 of a four-step sales process, for example, adding more analytics to track your site won’t answer why they’re doing that. And it will not tell you how to fix the problem either. Instead, you have to speak directly to people to understand what the blocker is. And that requires qualitative methods that dig into the “why” through conversation and observation.
On the opposite side of the equation, if we uncover a problem through a qualitative method, a quantitative tool like analytics can help us understand how big of an impact that issue has. If a single tester in a usability test misses a call to action, for example, we can measure the hits to that button to see if it’s a common, measurable pattern with our site users or just a fluke on that one test.
For the purpose of this post, let’s focus on the qualitative research methods we draw on.
The Qualitative Methods We Use
As a qualitative researcher, I tend to lean more toward the investigative side of the research equation. That involves talking to people and getting a sense of what they want or expect, and why something is happening the way that it is. Here are the three most common “qual” methods we use.
Interviews

This one might seem a little basic. Everyone knows what an interview is, right? It’s one person talking to another, asking them questions from a list.
Interviews can come across as stiff, structured back-and-forths that follow a strict script, but I like to treat them as just ... talking to people. Sure, I have some key questions I plan to hit, but letting people talk and follow different paths of conversation can uncover problems we didn’t even know existed. Good interviewing is a skill that takes time and effort to develop: it means asking strong follow-up questions and knowing when to circle back to something brought up earlier in the conversation. I like to pair my interviews with task walk-throughs, where people click through a client or competitor site and talk about their experience and how it relates to their overall journey.
Great insights do get uncovered from simply talking to people. I once did a research project for a company looking to provide mentorship to women in technology. To prepare, I read a lot of reports and surveys about female mentorship to get an idea of what women needed. But it wasn’t until I started talking to women in the field that a key insight surfaced that I hadn’t come across yet: a lot of women wanted to know how to deal with maternity leaves interrupting their career paths. It wasn’t a topic that was widely discussed, I hadn’t seen it in the quantitative data I reviewed, and it was a big gap we wouldn’t have accounted for if it weren’t for those interviews.
Usability Testing

Usability testing is a way to see how easy a product is to use by testing it with target user groups. We want to make sure the experience our designers have created meets users’ needs and is understandable in the context they’ll use it in. Usability testing can be quantitative or qualitative, but most often we deploy it as a qualitative method. We generally test with five (yes, five!) testers per user group. This might not seem like a lot, but research has repeatedly shown that five testers uncover the large majority of usability issues, which makes it the right amount of feedback for the time investment.
Post-COVID, we generally conduct these tests remotely using a UX research platform. These tests can be moderated, which means the researcher is on a video call with the tester, communicating with them. Or they can be unmoderated, where the tester goes through the tasks on their own, speaking about their experience out loud. We get the richest data from moderated tests, since they allow us to ask follow-up questions and dig deeper with our testers. The trade-off is that moderated tests require more resources, since you need to factor in researcher time.
Field Studies

Field studies go by a lot of names, which can be perplexing. Sometimes they’re referred to as adaptive interviews; other times they’re called direct observation or contextual inquiry. In anthropology, field studies are called ethnographic research, a term I’ve also heard used in some UX circles, though it doesn’t seem to be applied in the same manner as in academic research.
To keep it simple, we’ll stick with the term field study.
Field studies are most useful if the end user’s context has a big impact on their usage of a website or application. If we’re designing an interface for workers on a construction site, this is an environment where the lighting, sounds and the PPE the worker is wearing could have a big impact on their use of a phone or tablet. Or if we're designing a transit app for riders to see when a bus is coming, the weather, their sense of urgency or even the connectivity of their physical location can change our design requirements.
A field study can simply involve going to these places and observing people as a fly on the wall, to see what barriers they’re facing or what shortcuts they take to get around current technological impediments. But for some studies it’s more effective to have direct contact with these users, perhaps including interview questions to understand why they do things the way they do.
Why Qualitative Research Matters

It's important to start projects off on the right foot with input from your users. Best practices can only take us so far—there will be user needs that are unique to every product. I've been doing user research for a long time, and it's extremely rare that a round of qualitative research doesn't surface an unexpected, surprising insight.
Qualitative research can help to provide the human context behind your user base, rather than just the demographics or statistics of their usage. After all, we build products for people to use, so we need to understand those people and why they use your application. Building an application without qualitative inputs is like building a house with half the blueprints missing—which means we might have to go back to fix things after it's finished (which ends up costing a lot more!).
And while it may all sound easy enough, if you don’t follow proper methodology you can miss out on insights that drive business value. Then there are the outputs of this qualitative research. For usability testing, that means direct feedback on prototypes. For interviews and field studies, we use the data to build resources that help guide an entire application modernization project.
I hope these explanations offer a peek behind the curtain on the UX work we do and help you see the value in our methods and understand the approach we take. If there are any user research or UX design terms that you find confusing or don’t understand, just reach out and let us clarify things!
Ready to get started with your application modernization journey? Schedule a call with our experts today to kickstart your transformation!