Forrester: Experience Research Platforms Wave, Q1 2026
Forrester’s Q1 2026 Wave evaluation of experience research platforms assesses eight vendors across 31 criteria spanning current offering and strategy. The full report was authored by Senem Guler Biyikli, PhD, and published January 23, 2026. The blog post announcing it appeared January 27. The report itself is behind a paywall ($2,995 for individual purchase, available at no additional cost to Forrester clients), but the announcement blog summarizes the three themes that shaped the evaluation.
What the evaluation covers
Forrester defines experience research platforms (XRPs) as tools companies use to collect qualitative data and weigh it alongside quantitative data in product and service decisions. Vendors are assessed on their ability to enable participant recruitment, facilitate remote research, and support insight synthesis at scale. The 2026 evaluation is the first edition to assess AI-moderated interviews as a distinct capability category.
Three themes emerged across vendor assessments.
Breadth of research needs. XRPs now serve a wider range of roles than researchers alone: designers, CX professionals, marketers, and product managers are all running studies. Vendors strong in traditional researcher workflows are often weaker for these adjacent roles, and no single platform fully covers every use case.
AI expectations beyond summaries. Users have moved past expecting AI to produce high-level summaries. The bar in 2026 is deeper, faster analysis, described in the evaluation as comparable in sophistication to interactions with general-purpose AI tools like ChatGPT. Vendors whose AI features amount mainly to transcript summaries or keyword extraction score lower against this criterion.
AI-moderated interviews. This capability generated the most enthusiasm among users in the evaluation: it lets researchers conduct structured interviews at scale without a live moderator. The feature removes the language and time-zone barriers that limit traditional research programs, and it enables teams to run more sessions than their headcount would otherwise allow.
Why this matters for research teams
The Wave evaluation is useful primarily as a signal about market direction and vendor priorities. When Forrester invests in a new criterion, as it has with AI-moderated interviews in this edition, it indicates that the capability has crossed from experimental to expected. Teams building research infrastructure in 2026 need to assess whether their current platform supports AI-moderated methods, and if not, whether that gap is causing them to skip research they would otherwise run.
The evaluation also surfaces the gap between what platforms offer and what users expect from AI. Teams whose primary experience of AI in their XRP is better transcription are likely to feel that the platform lags their own use of general AI tools. That gap creates pressure to build fragmented workarounds, such as exporting transcripts to external LLMs and managing insights across multiple tools, rather than working within a single environment.
Who this is useful for: Research operations managers evaluating or re-evaluating their platform stack. Research leads making the case internally for investment in research tooling. Teams that have not formally assessed their current platform since AI capabilities became mainstream and want a framework for doing so.