Unmoderated usability testing checklist: before, during, and after the study

This checklist covers the full unmoderated usability testing process, from setup through reporting. Pay special attention to the pilot step and the data-quality checks: skipping either is the most common reason unmoderated test data ends up unusable.

Before the study

  • Define 3-5 tasks covering the most critical user journeys
  • Write task instructions in plain language — self-contained, no interface terms, no ambiguity
  • Define success criteria for each task (specific screen, URL, or action the tool can detect)
  • Set benchmarks: the completion rate, time-on-task, and SEQ (Single Ease Question) scores that constitute a pass or fail
  • Configure the study in the testing tool (Maze, UserTesting, Lyssna, UXtweak)
  • Add post-task questions (the SEQ at minimum) and a post-study questionnaire (the System Usability Scale, SUS)
  • Set device, browser, and language requirements if applicable
  • Run a pilot with 3-5 participants — fix task wording, prototype paths, and tool configuration
  • Recruit 30-50 participants from the target audience (over-recruit by 15-20%)

During the study

  • Monitor the first 5-10 completions for quality (flag suspicious sessions)
  • Check that success criteria are triggering correctly for completed tasks
  • Remove low-quality sessions in real time and recruit replacements
  • Track dropout rates — if more than 30% abandon, the study may be too long or confusing
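The in-flight checks above can be sketched in a few lines. Note that the session fields, the 30-second floor, and the 30% dropout threshold here are illustrative assumptions, not any tool's real export format; adapt them to whatever your platform actually records.

```python
# Sketch of in-flight quality checks for an unmoderated study.
# Session dicts below are an assumed shape, not a real tool export.

def dropout_rate(sessions):
    """Fraction of started sessions that were abandoned."""
    started = len(sessions)
    abandoned = sum(1 for s in sessions if not s["completed_study"])
    return abandoned / started if started else 0.0

def flag_suspicious(session, min_seconds=30):
    """Flag sessions that are implausibly fast or skipped every task."""
    too_fast = session["duration_seconds"] < min_seconds
    no_effort = all(t["gave_up"] for t in session["tasks"])
    return too_fast or no_effort

# Two illustrative sessions: one plausible, one rushed and abandoned.
sessions = [
    {"completed_study": True, "duration_seconds": 420,
     "tasks": [{"gave_up": False}, {"gave_up": True}]},
    {"completed_study": False, "duration_seconds": 12,
     "tasks": [{"gave_up": True}, {"gave_up": True}]},
]

rate = dropout_rate(sessions)
if rate > 0.30:
    print("Dropout above 30%; review study length and task wording")
flagged = [s for s in sessions if flag_suspicious(s)]
```

Flagged sessions feed the "remove and replace" step: drop them from the dataset and top the quota back up from your over-recruited pool.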

After the study

  • Remove incomplete and low-quality sessions from the dataset
  • Calculate completion rates, median time-on-task, and SEQ averages per task
  • Generate click path visualizations and heatmaps for failing tasks
  • Code open-ended responses into themes with frequency counts
  • Calculate the overall SUS score and convert to a letter grade
  • If comparing variants: run statistical comparison and note significant differences
  • Write the report: task summary table + problem areas + recommendations
  • Present findings with a traffic-light task table and click path visualizations
  • Plan follow-up: moderated testing on tasks that failed (to understand why), or redesign and retest
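The per-task metrics and SUS scoring above can be sketched as follows. The result-tuple shape is an assumption for illustration, but the SUS formula is the standard one (odd items contribute score − 1, even items 5 − score, sum multiplied by 2.5); the letter-grade cutoffs only approximate the Sauro–Lewis curved grading scale, so check your preferred reference before reporting.

```python
from statistics import mean, median

def task_metrics(results):
    """Completion rate, median time-on-task, and mean SEQ for one task.
    `results` is a list of (completed, seconds, seq_1_to_7) tuples,
    an assumed shape rather than any tool's export format."""
    completion = sum(1 for c, _, _ in results if c) / len(results)
    med_time = median(t for _, t, _ in results)
    seq_avg = mean(s for _, _, s in results)
    return completion, med_time, seq_avg

def sus_score(responses):
    """Standard SUS scoring for one respondent's ten 1-5 answers:
    odd items contribute (score - 1), even items (5 - score),
    and the sum is multiplied by 2.5 to give a 0-100 score."""
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

def sus_grade(score):
    """Rough letter grade; cutoffs approximate the curved grading
    scale and should be verified against your reference of choice."""
    if score >= 78.9:
        return "A"
    if score >= 72.6:
        return "B"
    if score >= 62.7:
        return "C"
    if score >= 51.7:
        return "D"
    return "F"
```

For example, a respondent who answers 3 on every item scores 50, which lands in the failing band; for the overall study score, average the per-respondent SUS scores before grading.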