Maze: The Future of User Research Report 2026
Maze publishes an annual state-of-the-industry survey on user research practices. The 2026 edition is based on responses from nearly 500 practitioners — 44% UX and product researchers, 26% designers, 9% marketers — collected between December 23, 2025 and January 13, 2026. Respondents came primarily from Europe (34%) and North America (31%), spanning companies from startups to large enterprises. Expert contributions came from practitioners at Twilio, 1Password, Adobe, and Mozilla.
What the numbers show
AI use in research workflows grew substantially in a single year. In 2026, 69% of respondents use AI in at least some of their research projects, up 19 percentage points from the year before. Teams report faster turnaround times (63%), improved team efficiency (60%), and more organized workflows (56%) as the primary benefits. The tasks most commonly handed to AI are transcription, synthesis, and generating research questions — all high-volume, time-consuming work that historically consumed the hours researchers might otherwise spend on interpretation.
The number of organizations where research is essential to all levels of business strategy nearly tripled in a single year, from 8% in 2025 to 22% in 2026. The report frames this as structural rather than cosmetic: research teams are being pulled into conversations earlier, used to inform long-term strategy rather than to validate decisions already made. Where researchers were once called in after the direction was set, they are now increasingly involved in setting it.
The core tension
AI adoption is not uniformly positive in the report’s framing. As research demand rises — driven in part by AI’s lower barrier to running a study — non-researchers are conducting more of their own work. Product managers (39%), market researchers (35%), and marketers (23%) are now regularly producing insights. The risk is not that more people can run research; it is that rigor becomes distributed without being maintained. More studies without shared frameworks, centralized repositories, or quality standards produce noise rather than better decisions.
The report identifies building shared infrastructure — centralized insights, quality standards, continuous enablement — as the antidote rather than slowing adoption. The teams that are managing AI’s entry well are not just running more studies; they are designing systems of learning that can sustain the pace.
What researchers say they cannot delegate
Respondents were clear about where human involvement remains essential. Interpreting nuance and emotion (82%), ethical decision-making (80%), and framing the right questions (76%) were the most frequently cited areas. These are not incidental capabilities — they are the core of what makes research findings usable by decision-makers. The emerging division of labor is that AI handles execution volume while researchers focus on what requires context and judgment.
Over a third of respondents (35%) believe the researcher role is becoming more strategic; 33% describe it as becoming more blended across teams. The direction of change is consistent: the researcher’s function is expanding beyond delivery into stewardship — setting standards, enabling others to do research, and connecting findings to business decisions.
Who this is useful for:

- Research managers making decisions about team structure, tooling, and process design as AI changes the division of work.
- Individual researchers who want survey data on where the profession is heading and what skills will be most valuable.
- Research operations leads building the infrastructure and standards that prevent distributed research from degrading in quality.