Nielsen Norman Group: Don't outsource analysis to AI

Maria Rosala, Director of Research at Nielsen Norman Group, recorded this five-minute video in January 2026. The argument targets a specific failure mode: researchers who hand qualitative synthesis to an AI tool and treat the output as findings. The video does not argue against AI in research; it argues specifically against removing human judgment from the analysis stage.

Who the video is for: UX researchers and research managers who are experimenting with AI-assisted synthesis and want a structured argument for where human oversight must remain. It is also useful for research operations professionals making decisions about tool adoption and workflow guidelines. The video assumes familiarity with qualitative research practice.

Key takeaways:

  1. Credibility is a professional asset, not a default. When analysis is outsourced to AI, the researcher can no longer credibly defend the findings: they did not generate the interpretation, so they cannot fully account for it. Rosala frames this as a matter of professional standing rather than ethics: stakeholders who challenge findings will expose the gap between what was reported and what the researcher can actually defend.

  2. AI-generated analysis is shallow by construction. Language models identify surface-level patterns through statistical association, not through understanding of user context, project history, or research intent. Insights produced without human involvement are structurally limited: not just occasionally inaccurate, but constrained in the kind of meaning they can produce.

  3. Critical thinking is not separable from analysis. Analyzing qualitative data is itself a form of learning — researchers develop understanding of users by doing the work of interpretation, not by reviewing a summary. Delegating analysis removes this developmental function alongside the practical one.

  4. Accountability becomes ambiguous. When AI generates the synthesis, it is unclear who is responsible for decisions made on the basis of those findings. Rosala argues this creates organizational risk that extends beyond the research team to the decisions stakeholders make using AI-generated outputs.

The video is accompanied by a companion article and links to related NN/g courses. Its structure, four distinct risks stated concisely, makes it easy to reference in team discussions about AI adoption policies. The argument is not theoretical: it is aimed at practitioners who already have AI tools available and are deciding how far to go.

Worth watching if: your team is considering using AI tools to replace thematic analysis after sessions, or you are a research manager building guidelines for the appropriate use of AI in qualitative research workflows.