MaxDiff checklist: design, fielding, and analysis for product teams

This checklist covers the full MaxDiff cycle — from defining the decision, through fielding the survey, to writing the final recommendation. Use it for a single feature-prioritization study or as the backbone for a recurring quarterly priority refresh.

Before

  • Write down the specific decision the MaxDiff result will inform
  • Confirm with stakeholders that they will act on the result
  • Draft an item list of 8–30 candidates, all mutually exclusive and parallel in style
  • Pilot the item list with 5 internal users and rewrite anything confusing
  • Choose a survey tool that supports MaxDiff (Conjointly, Sawtooth, Qualtrics, Displayr, OpinionX, SurveyMonkey)
  • Calculate the survey configuration: sets per respondent s = r·n / (x·p), where r is the target appearances per item across the whole sample (aim for r ≥ 200), n the number of items, x the items per set, and p the number of respondents
  • Confirm the sample size: 100 respondents minimum for a single segment, 100–200 per segment if you will compare segments
  • Match the sample to the population the decision affects
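The configuration step above is simple arithmetic, sketched below. The variable meanings (r as target appearances per item across the sample, x as items per set, p as respondents) are this checklist's assumptions, not a specific tool's API:

```python
# Sketch: solve the MaxDiff configuration formula for sets per respondent.
# Each item appears s * x * p / n times across the sample, so requiring
# s * x * p / n >= r and solving for s gives s >= r * n / (x * p).
import math

def sets_per_respondent(n_items: int, items_per_set: int,
                        respondents: int, target_appearances: int = 200) -> int:
    """Smallest whole number of sets per respondent that hits the
    target total appearances per item (default r = 200)."""
    raw = target_appearances * n_items / (items_per_set * respondents)
    return math.ceil(raw)

# Example: 20 items, 4 per set, 150 respondents -> 7 sets per respondent,
# comfortably inside the 10-20 set fatigue limit.
s = sets_per_respondent(20, 4, 150)
```

If the result lands above the 10–20 set fatigue limit, increase the sample size p or reduce the item list rather than stretching the survey.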

During

  • Add a 2–3 sentence introduction warning respondents that the question sets will feel repetitive
  • Limit each set to 3–5 items (4 is standard)
  • Limit total sets to 10–20 per respondent to avoid fatigue
  • Field for 1–2 weeks and monitor completion rate and quality flags daily
  • Watch for straight-liners and speeders and exclude them from the analysis

After

  • Calculate scores using the simple count formula, (best picks − worst picks) / appearances, or Hierarchical Bayes estimation
  • Calculate the randomness threshold: 100 / number of items, the percentage share an item would earn if respondents chose at random
  • Read the results three ways: average score, top-3 reach, segment differences
  • Group items into clear winners, clear losers, and indistinguishable-from-random
  • Compare segment scores and flag the largest divergences
  • Write the report leading with the decision, not the method notes
  • Schedule 5–8 follow-up interviews with respondents from the divergent segments
  • Document the configuration (items, sets, sample) so the next study can reuse the design
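The scoring and grouping steps above can be sketched in a few lines. The input shape, the best-pick share normalization, and the tolerance band around the randomness threshold are all assumptions for illustration; Hierarchical Bayes scoring needs a dedicated estimation package and is not shown:

```python
# Sketch: simple count-based MaxDiff scoring plus the randomness cut.

def maxdiff_scores(counts):
    """counts: {item: (best_count, worst_count, appearances)}.
    Simple score (best - worst) / appearances, ranging from -1 to 1."""
    return {item: (b - w) / n for item, (b, w, n) in counts.items()}

def percent_shares(counts):
    """Share of all 'best' picks each item captured, summing to 100."""
    total_best = sum(b for b, _, _ in counts.values())
    return {item: 100 * b / total_best for item, (b, _, _) in counts.items()}

def group_items(counts, tolerance=0.25):
    """Winners score above zero, losers below; items whose best-pick share
    falls within `tolerance` of the 100/n randomness threshold (an assumed
    band) are flagged as indistinguishable from random."""
    threshold = 100 / len(counts)
    shares = percent_shares(counts)
    scores = maxdiff_scores(counts)
    groups = {"winners": [], "losers": [], "indistinguishable": []}
    for item in counts:
        if abs(shares[item] - threshold) <= tolerance * threshold:
            groups["indistinguishable"].append(item)
        elif scores[item] > 0:
            groups["winners"].append(item)
        else:
            groups["losers"].append(item)
    return groups
```

Running the same grouping per segment and diffing the results gives the divergences that the follow-up interviews should probe.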