There’s a version of this conversation that bores me. The one where AI is either going to replace UX researchers entirely or do absolutely nothing of significance. Both positions are wrong, and more importantly, both are incurious. They skip the interesting part, which is the specific, granular question of what actually changes when you have these tools available and what stays stubbornly the same.
I’ve been using AI across various parts of the research process for a while now. Not experimentally. As a working method. And the picture that’s emerged is more textured than the discourse suggests.
What AI is genuinely good at in research is transforming volume into structure. If you have 40 interview transcripts and need to identify thematic patterns, AI can do a first pass in about 20 minutes that would take a junior researcher three days. That’s not nothing. That’s a meaningful compression of time that can be redirected toward analysis, synthesis, and the kind of interpretive thinking that actually requires a human.
It’s also good at surfacing things you might have missed. Not because it’s smarter, but because it doesn’t get tired, doesn’t anchor on the first interesting thing it finds, and doesn’t carry the confirmation bias that comes from sitting in the room with participants. It processes the whole corpus with the same attention. That’s a useful check on your own interpretation.
Where it falls down is exactly where you’d expect. Anything requiring contextual judgment. The moment in an interview where what someone says doesn’t quite match how they say it. The pause before an answer that tells you more than the answer itself. The thing a participant didn’t say, which turned out to be the most important data point. AI doesn’t catch that. It works with what’s there, not with what’s absent or embodied.
This matters more than people acknowledge. Research is an interpretive practice. The transcript is a residue of an experience. A lot of the meaning lives in the experience itself, in the relationship between researcher and participant, in the social and emotional texture of the conversation. AI is working from the residue. Which means there are whole categories of insight it simply cannot generate.
The more concerning pattern I’ve observed is what happens when teams start treating AI-generated analysis as equivalent to human-generated analysis. The outputs look similar. They’re structured. They use the right language. They feel authoritative. But they’re missing the interpretive layer that comes from someone who was actually in the room, who read the body language, who made a judgment call about what to pursue and what to let pass.
When those outputs feed into product decisions without that distinction being understood, you start making choices based on a flattened version of reality. Technically informed. Experientially shallow.
The research question doesn’t change. What changes is who’s doing which parts of the work and what that means for the quality of the thinking at the end.
The most productive frame I’ve found is to treat AI as a research assistant with extraordinary stamina and no judgment. You wouldn’t let a junior researcher with no judgment make your synthesis decisions. You’d use their work as an input to your own. That’s exactly how AI analysis should sit in the process.
Which brings me to the thing that doesn’t change at all: the quality of the questions.
AI can process whatever you give it. It cannot tell you whether you asked the right things in the first place. It cannot tell you whether your recruitment screener was too narrow, whether your discussion guide was leading, whether the framing of your study missed the actual tension in the user experience. Those are research design decisions that require expertise and judgment that sit entirely outside what the tool can offer.
In fact, AI makes research design more important, not less. When the analysis layer gets faster, the front end of the process becomes the primary place where quality is determined. Garbage in still produces garbage out. It just produces it much faster now.
The researchers I’ve seen thrive in this environment are the ones who’ve doubled down on the parts AI can’t touch. They’re spending more time on study design, more time in the actual conversations, more time on the interpretive synthesis that connects research to strategic direction. They’re using AI to clear the operational overhead so they can go deeper where depth matters.
Research, at its best, is an act of empathy at scale. You’re trying to understand human experience well enough to shape something that serves it. That requires presence, judgment, and a level of interpretive care that no language model is going to replicate.
AI changes the economics of the work. It doesn’t change the nature of it.
Understand that distinction clearly, and you’re already ahead of most of the conversation happening in this space.