Daniel Mitev: What 100 UX researchers said about AI in 2026
This article is Daniel Mitev’s analysis of a Lyssna survey of 100 UX researchers conducted in December 2025. The survey data is the anchor, but the piece is written as an interpretive essay rather than a report — Mitev uses the numbers to argue a position: automation is absorbing execution, and judgment is becoming unavoidable.
The headline finding is that 88% of researchers already use AI-assisted analysis and synthesis. Transcription, tagging, and first-pass pattern recognition are moving into automation as a baseline expectation. More specifically, 23% use AI to surface patterns across datasets, and 48% expect synthetic participants to influence workflows in the near term.
Mitev distinguishes carefully between what AI handles well and what it does not. AI processes frequency — it surfaces what appears often across sessions. Researchers determine significance — they decide what matters and why, given context that a model cannot access. Synthetic participants are useful for validating known hypotheses but are not appropriate for open-ended discovery work, where the most valuable findings are often the ones no one anticipated.
Two structural tensions run through the article. The first is research ownership: as AI lowers the technical barrier to running a study, non-researchers are increasingly conducting their own. 36% of respondents see this trend accelerating. The concern is not that more people can run studies — it is that rigor becomes distributed without being maintained. Mitev frames the researcher’s future role as stewardship of quality rather than gatekeeping of execution.
The second tension is ROI. 25% of researchers struggle to connect insights to measurable business outcomes. Mitev argues this is the discipline’s most persistent and underaddressed problem. AI can accelerate output volume, but it does not make research more legible to business decision-makers. That translation still requires human skill.
The article closes with five recommendations: learn AI as infrastructure rather than as identity; connect insights to business consequences rather than deliverables; build durable research systems; protect rigor while accelerating pace; and focus on clear thinking over operational speed.
Who this is useful for: Mid-career researchers who want a grounded, data-backed perspective on where the profession is heading; not a trend forecast, but an analysis of what practitioners are already doing and what structural challenges those changes surface. It is also directly relevant to research managers and operations leads making decisions about team structure as AI tools change how individual researchers spend their time.