AI method for customer research — Aakash Gupta talk
What the video covers
Caitlin Sullivan, one of the leading practitioners in AI-assisted user research, joins Aakash Gupta’s Product Growth podcast to demonstrate how to use Claude for rigorous customer research analysis. The episode includes live demos of both survey analysis and interview analysis workflows, with step-by-step prompting strategies and a focus on avoiding the hallucination problems that have made many researchers skeptical of AI tools.
Who it’s for
Product managers and UX researchers who run customer interviews, surveys, or any form of qualitative analysis and want to integrate AI without sacrificing rigor. The workflow is practical and immediately applicable, with exact prompts shown and explained.
Key takeaways
- Replicate the human process, do not shortcut it. The core principle is that good AI analysis mirrors how experienced researchers work: read through all the data first, build a codebook, analyze each participant individually, then synthesize across participants. Sullivan argues that the researchers getting poor results from AI are the ones who skip steps, asking the model to “summarize all interviews” in a single prompt rather than walking through the structured process.
- Per-participant analysis is essential. Instead of throwing all interview transcripts at the model at once, Sullivan’s workflow analyzes each participant separately, extracting themes, quotes, and contradictions. This prevents the model from averaging across participants and losing the specific insights that make qualitative research valuable.
- Verification through contradiction checking markedly improves output quality. After the initial analysis pass, Sullivan runs a dedicated verification step where the model explicitly looks for contradictions between what participants say and what they do, between different participants, and between the data and the team’s assumptions. This step consistently surfaces insights that a single-pass analysis would miss.
- For surveys, code first, then let AI analyze. Sullivan insists that open-ended survey responses need human coding before AI analysis. She demonstrates how to build a codebook from a sample of responses, apply it consistently, and then use AI to identify patterns within the coded data.
- Claude is the preferred model for analysis work. Sullivan explains that her choice is based on evaluation: Claude produces more thorough, more detailed analysis by default than other models, requiring fewer correction prompts to reach the analytical depth she needs.
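The staged workflow in the takeaways above can be sketched in code. This is a minimal illustration, not Sullivan’s actual tooling: the prompt wording, function names, and the keyword-matching coding helper are all assumptions standing in for steps the episode says should involve human judgment.

```python
"""Sketch of the staged analysis workflow: per-participant prompts,
a dedicated contradiction-checking pass, and codebook application.
All prompt text and helpers here are illustrative, not the exact
prompts shown in the episode."""


def per_participant_prompt(participant_id: str, transcript: str) -> str:
    """One analysis prompt per participant, never a combined dump,
    so insights are not averaged across participants."""
    return (
        f"Analyze the interview transcript for participant {participant_id}.\n"
        "Extract: (1) key themes, (2) verbatim supporting quotes, "
        "(3) contradictions between what they say and what they do.\n\n"
        f"Transcript:\n{transcript}"
    )


def verification_prompt(per_participant_analyses: list[str]) -> str:
    """A separate verification pass over the per-participant outputs,
    explicitly hunting for contradictions a single pass would miss."""
    joined = "\n\n---\n\n".join(per_participant_analyses)
    return (
        "Review the per-participant analyses below. Explicitly list "
        "contradictions (a) within a participant, (b) between participants, "
        "and (c) between the data and our stated assumptions.\n\n" + joined
    )


def apply_codebook(response: str, codebook: dict[str, list[str]]) -> list[str]:
    """Survey path: tag an open-ended response with matching codes.
    Naive keyword matching stands in for the human coding step that
    the talk says must come before AI analysis."""
    text = response.lower()
    return [code for code, kws in codebook.items() if any(k in text for k in kws)]
```

Each prompt string would then be sent to the model one participant at a time, with the verification prompt run only after every per-participant analysis is complete.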
Worth watching if…
You run customer interviews or surveys regularly and want a specific, tested workflow for using AI to cut analysis time without the risk of hallucinated insights making it into your product decisions.