UX research in the age of LLMs — practical guide
What the article covers
Connor Joyce, drawing on two years of building LLM-based products, presents a framework for how UX research practice adapts to the LLM era. The article introduces a process for defining quality in AI-generated outputs, using qualitative interviews and surveys to build quality rubrics that guide how AI features are prompted and evaluated.
Context
The article responds to a common anxiety: that AI agents, synthetic users, and deep research tools make UX researchers less necessary. Joyce argues the opposite. As products become cheaper to build and AI appears in nearly every workflow, the question of how much AI a feature should contain becomes a research question that requires human expertise.
Key takeaway
Joyce proposes a three-phase process:

1. Use qualitative interviews to understand how users judge AI-generated outputs in practice, capturing what they actually mean when they say an output “informs” or “summarizes.”
2. Validate those quality themes at scale through surveys that test how consistently each theme matters across users.
3. Synthesize the findings into a shared quality rubric that defines each quality factor, illustrates good and poor outputs, and guides both prompt engineering and evaluation (see the sketch below).

An 80/20 heuristic keeps this practical: a relatively small set of research-derived criteria accounts for most of what determines whether an output is useful. The article positions UX researchers as the people best equipped to do this work because they already specialize in translating user needs into actionable system requirements.
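To make the rubric artifact concrete, here is a minimal sketch of how a research-derived quality rubric might be encoded as structured data and reused for evaluation. The article does not prescribe an implementation; the QualityCriterion type, the example criteria for a hypothetical email-summarization feature, and the weights are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class QualityCriterion:
    """One research-derived quality factor for AI-generated output."""
    name: str          # short label for the factor
    definition: str    # what users mean by this factor, from interviews
    good_example: str  # an output users rated highly on this factor
    poor_example: str  # an output users rated poorly on this factor
    weight: float      # relative importance, from survey validation

# Hypothetical rubric for an email-summarization feature; the criteria,
# examples, and weights are illustrative, not taken from the article.
SUMMARY_RUBRIC = [
    QualityCriterion(
        name="actionability",
        definition="The summary surfaces decisions or tasks the reader must act on.",
        good_example="Flags the two approvals the thread is still waiting on.",
        poor_example="Restates the thread chronologically with no next steps.",
        weight=0.5,
    ),
    QualityCriterion(
        name="faithfulness",
        definition="Every claim in the summary is supported by the source thread.",
        good_example="Quotes the deadline exactly as stated in the email.",
        poor_example="Invents an owner for a task no one claimed.",
        weight=0.3,
    ),
    QualityCriterion(
        name="brevity",
        definition="The summary is short enough to read at a glance.",
        good_example="Three bullet points covering the whole thread.",
        poor_example="A summary nearly as long as the original thread.",
        weight=0.2,
    ),
]

def rubric_to_prompt(rubric: list[QualityCriterion], output_text: str) -> str:
    """Render the rubric as grading instructions, usable by human raters
    or as an evaluation prompt for an LLM judge."""
    lines = ["Score the output from 1 to 5 on each criterion:"]
    for c in rubric:
        lines.append(f"- {c.name}: {c.definition}")
        lines.append(f"  Good: {c.good_example}")
        lines.append(f"  Poor: {c.poor_example}")
    lines.append("\nOutput to score:\n" + output_text)
    return "\n".join(lines)

def overall_score(scores: dict[str, float], rubric: list[QualityCriterion]) -> float:
    """Collapse per-criterion scores into one weighted number for tracking."""
    return sum(scores[c.name] * c.weight for c in rubric) / sum(c.weight for c in rubric)
```

Used this way, one artifact serves both purposes from phase three: the definitions and contrasting examples feed prompt engineering, while the weighted score gives the team a single number to track as prompts or models change.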
Who should read this
Researchers working on LLM-powered products who need a methodology for defining and measuring output quality, and research leaders looking for frameworks that demonstrate how UX research creates value in AI product development.