These prompts cover the stages of concept testing where AI adds the most value: creating concept stimuli, analyzing participant responses, building discussion guides, and comparing multiple concepts after testing.
Generate concept statement variations
I need to test a product concept with target users. Here is the concept:
Product name: [name]
What it does: [one-sentence description]
Target audience: [who it is for]
Problem it solves: [the problem]
Key differentiator: [what makes it different from alternatives]
Generate 3 variations of a concept statement, each 80-120 words. Each variation should:
1. Start with the problem (not the product name)
2. Describe the solution in plain language (no jargon, no buzzwords)
3. State the key benefit in concrete terms (time saved, money saved, pain removed)
4. End with who it is for
Variation A: Lead with the emotional pain point
Variation B: Lead with the practical/functional problem
Variation C: Lead with a comparison to the current alternative ("Instead of X, you can Y")
Do not use superlatives ("best," "revolutionary," "game-changing"). Write as if explaining to a friend, not writing marketing copy.
Analyze concept test survey responses
I ran a concept test survey with [N] participants evaluating [concept description]. Each participant answered:
1. "In your own words, what does this product do?" (comprehension)
2. "How likely would you be to use this product?" (1-5 scale)
3. "What is the most appealing thing about this concept?" (open-ended)
4. "What concerns do you have about this concept?" (open-ended)
5. "How does this compare to what you currently use for [task]?" (open-ended)
Here are all responses:
[paste raw survey data]
Analyze the data and produce:
1. COMPREHENSION SCORE: What percentage of participants correctly understood the core value proposition? List the most common misunderstandings.
2. DESIRABILITY SCORE: Average rating and distribution. Flag any pattern in who rated high vs. low (if demographic/behavioral data is available).
3. APPEAL THEMES: The top 3-5 themes from "most appealing" responses, with representative quotes and frequency counts.
4. CONCERN THEMES: The top 3-5 themes from "concerns" responses, with representative quotes and frequency counts.
5. COMPETITIVE POSITIONING: How participants perceive this concept vs. their current solution — what it improves and what it lacks.
6. RECOMMENDATION: Based on the data, should the team proceed, iterate (on what specifically), or abandon? Justify with evidence.
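If you have the coded responses on hand, the two quantitative scores above (comprehension rate, desirability average and distribution) are easy to compute yourself before handing the qualitative data to the model. A minimal Python sketch; the field names (`understood`, `likelihood`) and the sample data are hypothetical:

```python
# Illustrative sketch: quantitative parts of the concept-test analysis.
# "understood" is a manual True/False code for whether the participant's
# own-words description matched the core value proposition; "likelihood"
# is the 1-5 rating. Both field names and the data are made up.
from collections import Counter

responses = [
    {"understood": True,  "likelihood": 4},
    {"understood": True,  "likelihood": 5},
    {"understood": False, "likelihood": 2},
    {"understood": True,  "likelihood": 3},
]

# COMPREHENSION SCORE: share of participants who correctly described
# the core value proposition.
comprehension = sum(r["understood"] for r in responses) / len(responses)

# DESIRABILITY SCORE: mean 1-5 rating plus the full distribution,
# since an average alone can hide polarized responses.
ratings = [r["likelihood"] for r in responses]
average = sum(ratings) / len(ratings)
distribution = Counter(ratings)

print(f"Comprehension: {comprehension:.0%}")
print(f"Desirability: mean {average:.2f}, "
      f"distribution {dict(sorted(distribution.items()))}")
```

Reporting the distribution alongside the mean matters: a concept rated mostly 1s and 5s needs a very different follow-up than one rated mostly 3s, even if the averages match.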
Create a discussion guide for a moderated concept test
I am running moderated concept testing sessions for [concept description]. Each session is 30-40 minutes with one participant from our target audience: [audience description].
Create a discussion guide with these sections:
1. INTRODUCTION (3 min): Build rapport, explain the session, set expectations (there are no wrong answers; we are testing the concept, not you)
2. CONTEXT (5 min): Questions about the participant's current behavior in [domain]. What they currently do, what tools/methods they use, what frustrates them. These questions establish whether the participant actually has the problem the concept solves.
3. CONCEPT EXPOSURE (5 min): Instructions for presenting the concept stimulus. Include what to say, what to watch for (facial expressions, immediate reactions), and the first question to ask after exposure.
4. COMPREHENSION PROBES (5 min): 3-4 questions that test whether the participant understood the concept WITHOUT leading them. Do not ask "Do you understand?" — ask them to describe it back.
5. DESIRABILITY PROBES (10 min): 5-6 questions about appeal, use cases, willingness to try, willingness to pay, and comparison to current solution. Include follow-up probes for both positive and negative reactions.
6. CONCERNS AND IMPROVEMENTS (5 min): Questions about what would hold them back, what is missing, and what they would change.
7. WRAP-UP (2 min): Final thoughts and anything else the participant wants to add.
For each question, include the intent (what you are trying to learn) in [brackets].
Compare multiple concepts from test results
I tested [N] concepts with [M] participants. Each participant evaluated all concepts and provided a forced ranking plus qualitative feedback.
Concept A: [brief description]
Concept B: [brief description]
Concept C: [brief description]
Results:
[paste ranking data, desirability scores, and key qualitative feedback for each concept]
Produce a comparison analysis:
1. RANKING SUMMARY: Which concept won overall? By what margin? Was the winner consistent across participant segments or did different segments prefer different concepts?
2. STRENGTH/WEAKNESS MATRIX: For each concept, list its strongest attribute (what participants praised most) and weakest attribute (what participants criticized most), with supporting quotes.
3. CROSS-CONCEPT INSIGHTS: Are there elements from losing concepts that participants valued and that could be integrated into the winning concept? List specific transferable elements.
4. RISK ASSESSMENT: What is the biggest risk with proceeding with the winning concept? What unanswered questions remain?
5. RECOMMENDATION: Proceed with Concept [X], incorporating [specific elements] from Concept [Y]. Justify with data from the test.
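The ranking summary in step 1 is straightforward to compute from the raw data before interpretation. A minimal Python sketch using mean rank position (lower is better), overall and per participant segment; the segment labels and rankings below are invented examples:

```python
# Illustrative sketch: summarizing forced-ranking results per concept,
# overall and by participant segment, to check whether the winner is
# consistent across segments. All data below is made up.
from collections import defaultdict

# Each entry: (segment, ranking), where the ranking lists concepts best-first.
rankings = [
    ("new users",   ["A", "B", "C"]),
    ("new users",   ["A", "C", "B"]),
    ("power users", ["B", "A", "C"]),
    ("power users", ["B", "A", "C"]),
]

def mean_ranks(entries):
    """Mean rank position per concept across a set of rankings."""
    totals, counts = defaultdict(int), defaultdict(int)
    for _, ranking in entries:
        for position, concept in enumerate(ranking, start=1):
            totals[concept] += position
            counts[concept] += 1
    return {c: totals[c] / counts[c] for c in totals}

overall = mean_ranks(rankings)
winner = min(overall, key=overall.get)

# Consistency check: does each segment's winner match the overall winner?
by_segment = defaultdict(list)
for segment, ranking in rankings:
    by_segment[segment].append((segment, ranking))
segment_winners = {}
for segment, entries in by_segment.items():
    ranks = mean_ranks(entries)
    segment_winners[segment] = min(ranks, key=ranks.get)

print(f"Overall mean ranks: {overall} -> winner: Concept {winner}")
print(f"Per-segment winners: {segment_winners}")
```

In this invented example, Concept A wins overall by a narrow margin while power users consistently prefer Concept B; that is exactly the segment split the ranking summary should surface before anyone declares a winner.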