
AI prompts for MaxDiff: item lists, score interpretation, segments

Ready-to-use AI prompts for MaxDiff studies — generate item lists, review them for overlaps, interpret scores, and compare segment differences.

How to use

Copy and paste into your AI assistant chat

These prompts help researchers use AI to design MaxDiff studies and interpret the results. Replace [bracketed placeholders] with your specifics before pasting into ChatGPT, Claude, or another LLM.

Prompt 1: Generate a candidate item list

I need to run a MaxDiff study to prioritize [features / messages / pain points / value props] for [product or audience description].

Product context: [1-2 sentences about the product, target user, business goal]
Decision the study will inform: [what we will do differently based on the result]
Constraints on the item list: [tone, length, anything to exclude]

Please generate 25-30 candidate items for the MaxDiff list. For each item:
1. Write it as a single self-contained statement of 5-12 words
2. Make sure no two items overlap or describe the same thing in different words
3. Avoid items that are exact opposites of each other
4. Use parallel grammatical structure across all items
5. Group the final list into 4-6 themes so the researcher can spot gaps

Then add a short note flagging any items you think might be confusing, abstract, or hard for a respondent to evaluate without additional context.

Prompt 2: Review an existing item list for problems

Here is the candidate item list for an upcoming MaxDiff study on [topic]:

[Paste the list of 8-30 items]

Please:
1. Flag any pair of items that overlap in meaning (a respondent could reasonably read them as describing the same thing)
2. Flag any pair of items that are exact opposites (high negative correlation will distort the scores)
3. Flag any item that is significantly longer, shorter, or more abstract than the others
4. Flag any item containing jargon that a typical respondent in [target audience] might not understand
5. Suggest a tightened rewrite for each flagged item
6. Suggest 2-3 candidate items that might be missing from the list given the research goal of [goal]
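The overlap and length checks in steps 1 and 3 can also be pre-screened mechanically before (or alongside) the LLM review. A minimal sketch using word-level Jaccard similarity — the example items and both thresholds are illustrative assumptions, not established cutoffs:

```python
# Rough pre-screen for a MaxDiff item list: flag near-duplicate pairs
# (word-level Jaccard similarity) and length outliers.
# The 0.5 similarity and 0.4 length-deviation thresholds are illustrative.

def words(item):
    return set(item.lower().split())

def jaccard(a, b):
    wa, wb = words(a), words(b)
    return len(wa & wb) / len(wa | wb)

def screen_items(items, overlap_threshold=0.5, length_factor=0.4):
    # All unordered pairs whose word overlap exceeds the threshold
    overlaps = [
        (i1, i2, round(jaccard(i1, i2), 2))
        for idx, i1 in enumerate(items)
        for i2 in items[idx + 1:]
        if jaccard(i1, i2) >= overlap_threshold
    ]
    # Items whose word count deviates strongly from the list's mean length
    lengths = [len(i.split()) for i in items]
    mean = sum(lengths) / len(lengths)
    outliers = [i for i, n in zip(items, lengths)
                if abs(n - mean) > mean * length_factor]
    return overlaps, outliers

items = [
    "Faster sync across all devices",
    "Sync quickly across all my devices",  # near-duplicate of the first
    "Export reports to PDF",
    "A significantly more customizable and configurable reporting dashboard experience",
]
overlaps, outliers = screen_items(items)
print(overlaps)  # the two sync items exceed the similarity threshold
print(outliers)  # the long dashboard item is a length outlier
```

A word-overlap heuristic only catches near-verbatim duplicates; paraphrased overlaps (step 1's harder case) still need the LLM or embedding-based similarity.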

Prompt 3: Interpret MaxDiff results

Here is the MaxDiff score table from a study with [N] respondents on [topic]:

[Paste table: item name, average utility score, top-3 reach %, segment scores if any]

Decision context: [the specific decision the study was designed to inform]
Randomness threshold: [100 / number of items, e.g., 4% for a 25-item study]

Please:
1. Group items into three buckets: clear winners (well above randomness threshold), clear losers (well below), and indistinguishable from chance
2. Identify the size of the gap between adjacent items in the ranking and flag where there is a large jump (a natural cutoff point)
3. Highlight any item that ranks very differently across segments and explain what that might mean for the decision
4. Recommend the top 3-5 items to act on, the bottom 3-5 items to deprioritize, and the items that need additional research
5. Draft a one-paragraph executive summary that ties the findings to the decision context
6. Flag any caveats or limitations the team should be aware of when defending the result
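The bucketing and gap logic in steps 1-2 is simple arithmetic you can also run yourself. A sketch assuming probability-scaled utility scores that sum to roughly 100 — the item names, scores, and the "well above/below" multipliers are made-up assumptions:

```python
# Bucket MaxDiff items against the randomness threshold (100 / number of
# items) and flag unusually large gaps between adjacent ranked items.
# Scores are hypothetical probability-scaled utilities summing to ~100;
# the 1.5x / 0.5x bucket multipliers and 2x gap factor are illustrative.

def bucket_and_gaps(scores, gap_factor=2.0):
    threshold = 100 / len(scores)
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    buckets = {"winners": [], "losers": [], "chance": []}
    for name, s in ranked:
        if s >= threshold * 1.5:
            buckets["winners"].append(name)   # well above chance
        elif s <= threshold * 0.5:
            buckets["losers"].append(name)    # well below chance
        else:
            buckets["chance"].append(name)    # indistinguishable from chance
    # Flag adjacent pairs whose score gap is far above the average gap:
    # these are natural cutoff points in the ranking
    diffs = [ranked[i][1] - ranked[i + 1][1] for i in range(len(ranked) - 1)]
    avg_gap = sum(diffs) / len(diffs)
    cutoffs = [
        (ranked[i][0], ranked[i + 1][0])
        for i, d in enumerate(diffs)
        if d > gap_factor * avg_gap
    ]
    return threshold, buckets, cutoffs

scores = {"Offline mode": 35, "Faster sync": 30, "Dark theme": 15,
          "CSV export": 12, "New icons": 8}  # 5 items -> threshold 20%
threshold, buckets, cutoffs = bucket_and_gaps(scores)
print(threshold)  # 20.0
print(buckets)
print(cutoffs)    # adjacent pairs with an unusually large score jump
```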

Prompt 4: Compare MaxDiff segments

I have MaxDiff scores for the same item list across [number] customer segments:

Item list: [list]
Segment A scores: [list with name and score]
Segment B scores: [list with name and score]
Segment C scores: [list with name and score]

Please:
1. Identify the items where the segments rank similarly (consensus items)
2. Identify the items where the segments rank very differently (divergence items)
3. For each divergence item, suggest what might explain the difference (different needs, different lifecycle stage, different use context)
4. Recommend whether the product should optimize for one segment, build separate experiences, or find a compromise
5. Suggest follow-up research questions to validate the largest segment differences
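The consensus/divergence split in steps 1-2 can also be computed directly by comparing each item's rank position across segments. A sketch with two hypothetical segments — the scores and the rank-spread cutoff of 2 positions are illustrative assumptions:

```python
# Split MaxDiff items into consensus vs divergence by comparing each
# item's rank across segments. Segment scores are hypothetical; the
# rank-spread cutoff (2 positions) is an illustrative assumption.

def ranks(scores):
    # Map each item to its rank position (1 = highest score)
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {item: pos for pos, item in enumerate(ordered, start=1)}

def split_by_agreement(segments, max_spread=2):
    per_segment = {name: ranks(s) for name, s in segments.items()}
    items = next(iter(segments.values())).keys()
    consensus, divergence = [], []
    for item in items:
        rs = [r[item] for r in per_segment.values()]
        # Divergence item: its rank varies by max_spread+ positions
        (divergence if max(rs) - min(rs) >= max_spread
         else consensus).append(item)
    return consensus, divergence

segments = {
    "A": {"Offline mode": 30, "Dark theme": 28, "Faster sync": 22, "CSV export": 20},
    "B": {"Offline mode": 32, "Dark theme": 10, "Faster sync": 28, "CSV export": 30},
}
consensus, divergence = split_by_agreement(segments)
print(consensus)   # items ranked similarly in both segments
print(divergence)  # items whose rank shifts across segments
```

Rank spread is a coarse measure; with many segments or many items, a rank correlation (e.g., Kendall's tau between segment rankings) gives a single agreement score per segment pair.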