
AI prompts for UX benchmarking: measurement plans, data analysis, and reports

Ready-to-use AI prompts for planning UX benchmarking studies, analyzing metrics data, generating stakeholder reports, and writing task scenarios.

How to use

Copy and paste into your AI assistant chat

These prompts help UX researchers use AI tools at every stage of a benchmarking study — from building the measurement plan and writing task scenarios to analyzing data and drafting stakeholder reports. Each prompt includes placeholders in [brackets] that you should replace with your project-specific details before pasting into an LLM.

Prompt 1: Build a benchmarking measurement plan

I am planning a UX benchmarking study for [product name/type].

Context:
- Product description: [brief description of the product and its primary user base]
- Comparison type: [previous version / competitor / industry standard / stakeholder target]
- Number of tasks to benchmark: [5-10]
- Participant budget: [number of participants you can recruit]

Based on this context, create a measurement plan that includes:
1. A recommended set of metrics covering effectiveness, efficiency, and satisfaction (with specific instrument names like SEQ, SUS, UMUX-Lite)
2. For each metric, explain what it measures, how to collect it, and how to calculate it
3. A recommended sample size with the calculation rationale
4. A suggested task structure (how many tasks, approximate session length, task order considerations)
5. A list of potential guardrail metrics to watch for unintended effects

Format the plan as a structured document I can share with stakeholders.
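If you want to sanity-check the sample size the model recommends, the standard power calculation for comparing two task success rates fits in a few lines of Python. This is a minimal sketch using the normal approximation for a two-sided two-proportion test; the 70% → 85% success rates below are illustrative figures, not from any particular study:

```python
import math
from statistics import NormalDist

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Participants per group needed to detect a change in task success
    rate from p1 to p2 with a two-sided two-proportion z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # 0.84 for 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Detecting a jump from 70% to 85% task success needs ~121 participants per round
print(sample_size_two_proportions(0.70, 0.85))
```

Small expected differences drive the required sample size up quickly, which is why benchmarking studies often report confidence intervals across rounds rather than chasing statistical significance on every task.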

Prompt 2: Analyze benchmarking data and generate comparison report

I have completed a UX benchmarking study. Here is the raw data:

[Paste or attach dataset with columns: participant_id, task_number, task_success (1/0), time_on_task_seconds, seq_score (1-7), sus_score (if applicable), segment_info]

Previous baseline data (if available): [paste previous round metrics or state "this is the first round"]

Please analyze this data and produce:
1. Per-task metrics: task success rate with 95% confidence interval, geometric mean time on task, mean SEQ score
2. Overall metrics: aggregate success rate, overall SUS or UMUX-Lite score with interpretation
3. Comparison to baseline: for each metric, calculate the change, run a significance test, and state whether the difference is statistically significant at the 95% confidence level
4. Segment analysis: break down all metrics by [mobile/desktop] and [novice/expert] segments
5. A "What, So What, Now What" summary for each task highlighting the most important findings

Flag any data quality issues you notice (outliers, suspicious patterns, insufficient sample sizes for segments).
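Before pasting raw data into Prompt 2, it helps to compute the headline statistics yourself so you can verify what the model returns. Here is a minimal stdlib-only sketch of three calculations the prompt asks for — an adjusted-Wald confidence interval for task success, geometric mean time on task, and a two-proportion z-test against a baseline round. The sample numbers are illustrative:

```python
import math
from statistics import NormalDist, fmean

def adjusted_wald_ci(successes, n, z=1.96):
    """95% CI for a task success rate; adjusted Wald behaves well at small n."""
    p_adj = (successes + z ** 2 / 2) / (n + z ** 2)
    margin = z * math.sqrt(p_adj * (1 - p_adj) / (n + z ** 2))
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

def geo_mean_time(seconds):
    """Geometric mean damps the long right tail typical of time-on-task data."""
    return math.exp(fmean(math.log(t) for t in seconds))

def two_proportion_z_test(s1, n1, s2, n2):
    """Two-sided z-test: is the change in success rate between rounds real?"""
    p_pool = (s1 + s2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (s1 / n1 - s2 / n2) / se
    return z, 2 * (1 - NormalDist().cdf(abs(z)))

low, high = adjusted_wald_ci(9, 12)          # 9 of 12 participants succeeded
print(f"success: 75% (95% CI {low:.0%}-{high:.0%})")
print(f"geometric mean time: {geo_mean_time([42, 55, 61, 240]):.0f} s")
z, p = two_proportion_z_test(18, 20, 9, 12)  # this round vs. baseline
print(f"z = {z:.2f}, p = {p:.3f}")
```

Note how wide the interval is at n = 12: small-sample benchmarks can still be useful, but the uncertainty should travel with the point estimate into the report.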

Prompt 3: Draft a stakeholder-ready benchmarking report

I have the following benchmarking results that I need to present to [executive team / product leadership / design team]:

[Paste the analyzed metrics, comparisons, and segment breakdowns]

Product context: [what the product does, what redesign or changes were made since the last benchmark, the team's UX goals for this period]

Write a benchmarking report that includes:
1. Executive summary (3-5 sentences covering the headline finding, the overall trend, and the top recommendation)
2. Methodology section (study type, participant count and profile, tasks tested, metrics collected, tools used)
3. Results section organized by task, each using the "What / So What / Now What" framework
4. Competitive comparison table (if applicable)
5. Trend chart descriptions (describe what each chart should show; I will create them separately)
6. Prioritized recommendations ranked by the gap between current performance and target
7. Appendix with full metric tables

Tone: clear, evidence-based, accessible to non-researchers. Avoid jargon without explanation.
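Prompt 3 assumes you are pasting already-analyzed metrics. If your dataset still contains raw SUS questionnaire responses, the standard scoring (ten items on a 1-5 scale, odd items positively worded, even items negatively worded) is easy to apply first. A small sketch, with an illustrative response set:

```python
def sus_score(responses):
    """Score one participant's SUS questionnaire on the 0-100 scale.
    Expects ten 1-5 Likert responses in questionnaire order."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses on a 1-5 scale")
    odd = sum(responses[i] - 1 for i in range(0, 10, 2))   # items 1,3,5,7,9
    even = sum(5 - responses[i] for i in range(1, 10, 2))  # items 2,4,6,8,10
    return (odd + even) * 2.5

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # → 85.0
```

A common interpretation anchor for the report: the cross-study average SUS score is roughly 68, so scores above that are above average.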

Prompt 4: Generate task scenarios for a benchmarking study

I need to create task scenarios for a UX benchmarking study of [product name/type].

Product context:
- What the product does: [description]
- Primary user personas: [list with brief descriptions]
- Key workflows: [list the main things users do in the product]
- Known problem areas from prior research: [list if available]
- Analytics data on most-used features: [list if available]

Requirements:
- Generate [5-10] task scenarios suitable for an unmoderated remote benchmarking study
- Each task must have: a realistic user scenario (1-2 sentences of context), a clear instruction, a defined starting point, and an observable success criterion
- Tasks should cover the most critical user workflows, not edge cases
- Order tasks from easiest to hardest to reduce early abandonment
- Avoid leading language that hints at the correct path

For each task, also suggest: the expected completion time for an experienced user, the most likely failure points, and which metrics are most relevant to track.