
AI prompts for card sorting: label cleanup, clustering, and segment comparison

Ready-to-use AI prompts for card sorting analysis — standardize group labels, interpret dendrograms, compare segments, and generate card labels from a content inventory.

How to use

Copy a prompt into your AI assistant chat and fill in the bracketed placeholders with your own study data.

These prompts cover the analytical stages of card sorting where AI saves the most time: cleaning up participant-generated labels, interpreting cluster analysis outputs, comparing results across user segments, and preparing cards for a new study.

Standardize group labels from an open card sort

I ran an open card sort with [N] participants and [M] cards. Participants created their own group labels. Here is the full list of group labels across all participants:

[paste list of all group labels]

Many of these labels are synonyms or near-synonyms. Your job is to:
1. Identify clusters of labels that mean the same thing
2. Propose a single canonical label for each cluster (use the most common or clearest participant term, not a new term)
3. Flag any labels that are ambiguous and could belong to multiple clusters

Format as a table:
Canonical Label | Participant Labels Mapped to It | Frequency (how many participants used a variant of this label) | Ambiguous? (yes/no + explanation if yes)

Rules:
- Do not invent labels that no participant used. Choose from participant vocabulary.
- If two labels are close but not identical in meaning, keep them separate and note the distinction.
- List labels that only one participant used but that do not fit any cluster — these may indicate unique mental models worth investigating.
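Before handing labels to an AI assistant, it can help to pre-cluster the obvious near-duplicates yourself so the assistant only has to judge the genuinely ambiguous cases. Below is a minimal sketch of that step using Python's standard-library `difflib`; the threshold of 0.8 and the greedy first-match strategy are assumptions, not a prescribed method, and the sample labels are hypothetical.

```python
from difflib import SequenceMatcher

def normalize(label: str) -> str:
    """Lowercase and strip punctuation so trivial variants match."""
    cleaned = "".join(ch for ch in label.lower() if ch.isalnum() or ch == " ")
    return " ".join(cleaned.split())

def cluster_labels(labels, threshold=0.8):
    """Greedy single-pass clustering: attach each label to the first
    cluster whose representative is similar enough, else start a new one."""
    clusters: list[list[str]] = []
    for label in labels:
        norm = normalize(label)
        for cluster in clusters:
            if SequenceMatcher(None, norm, normalize(cluster[0])).ratio() >= threshold:
                cluster.append(label)
                break
        else:
            clusters.append([label])
    return clusters

# Hypothetical participant labels from an open sort
labels = ["Account Settings", "account settings", "Settings - Account",
          "Billing", "Payments & Billing", "Help"]
for cluster in cluster_labels(labels):
    # First participant term stands in as the canonical label candidate
    print(cluster[0], "<-", cluster)
```

Note that string similarity only catches spelling variants; true synonyms ("Billing" vs "Payments") still need the semantic judgment the prompt above asks the assistant to supply.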

Interpret a dendrogram and recommend a category structure

I have a dendrogram from a card sorting study with [N] participants and [M] cards about [topic/domain]. The dendrogram shows how cards cluster at different agreement thresholds.

Here are the clusters at the 60% agreement level:
[paste cluster data — which cards group together at 60%]

Here are the clusters at the 70% agreement level:
[paste cluster data at 70%]

Items that did not reach 60% agreement with any group:
[list orphan items]

Based on this data:
1. Recommend a category structure using the 60% threshold as the baseline. For each category, list its items and the most common participant-given label.
2. Identify items that are "borderline" — they cluster at 60% but not at 70%, meaning agreement is moderate. Recommend whether to keep, cross-link, or investigate further.
3. For each orphan item, suggest: (a) which category it might fit with a forced placement, and (b) whether it should be cross-linked from multiple categories.
4. Flag any categories that seem too large (more than 10 items) and suggest sub-categories based on the 70% clustering.
5. Note any categories that seem too small (fewer than 3 items) and suggest whether to merge them.

Present the recommended structure as a navigable hierarchy (max 2 levels).
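If your card sorting tool exports a pairwise agreement matrix rather than pre-cut clusters, you can produce the 60% and 70% groupings yourself before pasting them into the prompt. The sketch below uses SciPy's hierarchical clustering; the card names, the agreement values, and the choice of average linkage are all illustrative assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical agreement matrix: fraction of participants who placed
# each pair of cards in the same group (symmetric, diagonal = 1.0).
cards = ["Pricing", "Invoices", "Refunds", "Profile", "Password"]
agreement = np.array([
    [1.00, 0.80, 0.65, 0.10, 0.10],
    [0.80, 1.00, 0.60, 0.10, 0.10],
    [0.65, 0.60, 1.00, 0.20, 0.10],
    [0.10, 0.10, 0.20, 1.00, 0.90],
    [0.10, 0.10, 0.10, 0.90, 1.00],
])

# Convert agreement to distance and build the dendrogram (average linkage).
distance = 1.0 - agreement
np.fill_diagonal(distance, 0.0)
Z = linkage(squareform(distance), method="average")

# Cut the dendrogram at each agreement level: t is the maximum
# cophenetic distance, i.e. 1 - agreement threshold.
for level in (0.6, 0.7):
    membership = fcluster(Z, t=1.0 - level, criterion="distance")
    groups: dict[int, list[str]] = {}
    for card, m in zip(cards, membership):
        groups.setdefault(m, []).append(card)
    print(f"{int(level * 100)}% agreement:", list(groups.values()))
```

In this toy data, "Refunds" joins the billing cluster at 60% but drops out at 70%, which is exactly the "borderline" pattern step 2 of the prompt asks the assistant to flag.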

Compare card sorting results across user segments

I ran the same card sort with two user segments:
- Segment A: [description, e.g., "new customers who signed up in the last 3 months"]
- Segment B: [description, e.g., "power users with 2+ years of activity"]

Here are the similarity matrices for each segment:
Segment A: [paste or describe top groupings]
Segment B: [paste or describe top groupings]

Compare the two segments:
1. SHARED GROUPINGS: Cards that both segments consistently placed together (agreement > 60% in both segments). These represent universal mental models.
2. SEGMENT-SPECIFIC GROUPINGS: Cards that one segment grouped together but the other did not. For each, hypothesize why the segments differ.
3. LABEL DIFFERENCES: Cases where both segments grouped the same cards together but used different labels. Note both labels and recommend which to use (or whether to A/B test).
4. CONFLICTING PLACEMENTS: Cards that Segment A placed in Category X but Segment B placed in Category Y. These items need cross-linking or adaptive navigation.
5. RECOMMENDATION: Should the architecture accommodate both segments with a single structure, or does the data suggest segment-specific navigation? Justify with evidence from the comparison.
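The shared vs. segment-specific distinction in steps 1 and 2 can be computed mechanically once you have per-segment pair agreement. A minimal sketch, assuming hypothetical card pairs, made-up agreement scores, and the 60% threshold the prompt uses:

```python
# Hypothetical per-segment agreement: fraction of each segment's
# participants who placed a card pair in the same group.
seg_a = {("Pricing", "Invoices"): 0.85, ("Pricing", "API keys"): 0.70,
         ("Profile", "Password"): 0.90, ("Invoices", "Refunds"): 0.40}
seg_b = {("Pricing", "Invoices"): 0.80, ("Pricing", "API keys"): 0.20,
         ("Profile", "Password"): 0.88, ("Invoices", "Refunds"): 0.75}

THRESHOLD = 0.6  # matches the >60% agreement cutoff used in the prompt

def classify(pair):
    """Label a card pair as shared, segment-specific, or neither."""
    a = seg_a.get(pair, 0.0)
    b = seg_b.get(pair, 0.0)
    if a > THRESHOLD and b > THRESHOLD:
        return "shared"
    if a > THRESHOLD or b > THRESHOLD:
        return "segment-specific"
    return "neither"

for pair in seg_a:
    print(pair, classify(pair), f"A={seg_a[pair]:.2f} B={seg_b[pair]:.2f}")
```

This only produces the raw classification; the hypotheses about *why* segments differ (step 2) and the single-vs-adaptive navigation recommendation (step 5) are where the AI assistant adds value.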

Generate card labels from a content inventory

I need to prepare cards for a card sorting study about [product/site description]. Here is our current content inventory:

[paste list of pages, features, or content items — can be a URL sitemap, a spreadsheet export, or a plain list]

Generate a set of 30-50 card labels for the sort:
1. Write each label in plain, user-facing language (no internal jargon, no technical IDs)
2. Each card should represent a single concept — if a page covers multiple topics, split it into separate cards
3. Exclude administrative, legal, or boilerplate pages (privacy policy, terms of service, cookie settings) unless the sort specifically targets those
4. Avoid repeating the same leading word across cards (e.g., do not start 5 cards with "Account..."), since shared prefixes nudge participants to group by wording rather than meaning
5. Flag any items that are ambiguous and might need two different card labels to test

Present the final card list as a numbered list with a brief note explaining any items you renamed, split, or excluded.
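If your inventory is a URL sitemap, a small script can produce first-draft labels for the assistant to refine. The sketch below is one possible pre-processing pass; the boilerplate slug list and sample paths are assumptions, and the output is only a starting point since slugs rarely read as true user-facing language.

```python
import re

# Hypothetical slugs to exclude as administrative/legal boilerplate
BOILERPLATE = {"privacy-policy", "terms-of-service", "cookie-settings", "legal"}

def label_from_path(path: str):
    """Turn the last URL path segment into a draft card label,
    or return None if the page should be excluded from the sort."""
    slug = path.strip("/").split("/")[-1]
    if not slug or slug in BOILERPLATE:
        return None
    words = re.split(r"[-_]+", slug)
    return " ".join(w.capitalize() for w in words)

# Hypothetical sitemap export
inventory = ["/pricing", "/account/billing-history", "/privacy-policy",
             "/help/reset-password", "/account/api_keys"]
cards = [lbl for p in inventory if (lbl := label_from_path(p)) is not None]
print(cards)
```

The remaining steps in the prompt (plain-language rewrites, splitting multi-topic pages, flagging ambiguous items) still need human or AI judgment on the actual page content.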