
AI prompts for the Kano Model: feature lists, question pairs, interpretation

Ready-to-use AI prompts for Kano Model studies — generate feature lists and question pairs, review wording, interpret category results, compare segments.

How to use

Copy and paste into your AI assistant chat

These prompts help product teams use AI to design Kano Model studies and interpret the results. Replace [bracketed placeholders] with your specifics before pasting into ChatGPT, Claude, or another LLM.

Prompt 1: Generate a candidate feature list and Kano question pairs

I am running a Kano Model study to prioritize features for [product description]. The decision the study will inform: [specific decision].

Target audience: [user type, segment, role]

Please:
1. Generate 12-15 candidate features to test, each phrased as a single user-benefit statement of 8-15 words
2. For each feature, write the Kano functional question ("How do you feel if you have...") and the dysfunctional question ("How do you feel if you do not have...")
3. For the dysfunctional question, describe the absence of the benefit, not its polar opposite (e.g., for the benefit "all files upload within 10 seconds," write "if some files take longer than 10 seconds to upload" rather than "if every file takes longer than 10 seconds")
4. Avoid technical jargon — phrase everything in terms of user benefit
5. Group the features into 3-4 themes so I can spot gaps in the list
6. Add the standard 5-point response scale to each question pair: "I like it / I expect it / I am neutral / I can tolerate it / I dislike it"
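Once respondents answer both questions on the 5-point scale, each answer pair maps to a Kano category through the standard Kano evaluation table. The sketch below encodes that table; the string labels for the scale and the function name are illustrative choices, and "P" is used for Performance (the classic table calls this category One-dimensional).

```python
# The standard Kano evaluation table: rows are the functional answer,
# columns are the dysfunctional answer, both on the 5-point scale.
SCALE = ["like", "expect", "neutral", "tolerate", "dislike"]

# A = Attractive, P = Performance (One-dimensional), M = Must-be,
# I = Indifferent, R = Reverse, Q = Questionable
TABLE = [
    #  like expect neutral tolerate dislike   <- dysfunctional answer
    ["Q", "A", "A", "A", "P"],  # functional: like
    ["R", "I", "I", "I", "M"],  # functional: expect
    ["R", "I", "I", "I", "M"],  # functional: neutral
    ["R", "I", "I", "I", "M"],  # functional: tolerate
    ["R", "R", "R", "R", "Q"],  # functional: dislike
]

def categorize(functional: str, dysfunctional: str) -> str:
    """Map one respondent's answer pair to a Kano category."""
    return TABLE[SCALE.index(functional)][SCALE.index(dysfunctional)]
```

For example, a respondent who likes having the feature and dislikes its absence lands in Performance, while one who expects it and dislikes its absence lands in Must-be.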

Prompt 2: Review existing Kano question pairs for problems

Here are the Kano question pairs for an upcoming feature prioritization study on [product]:

[Paste the 8-15 features with their functional and dysfunctional questions]

Please:
1. Flag any dysfunctional question that is the polar opposite of its functional question rather than describing the absence
2. Flag any question that mixes the user benefit with the technical implementation
3. Flag any pair where the wording is inconsistent in tone or length compared to other pairs
4. Suggest a tightened rewrite for each flagged item
5. Identify any feature where I should consider splitting it into multiple smaller questions
6. Suggest 2-3 features that might be missing from the list given the goal of [research goal]

Prompt 3: Interpret Kano results

Here are the categorized results from a Kano study with [N] respondents on [topic]:

For each feature, I have:
- Feature name and description
- Dominant Kano category (Must-be, Performance, Attractive, Indifferent, Reverse, Questionable)
- Percentage of respondents in each category
- Average importance score (1-9 scale)
- Segment splits if applicable

[Paste data]

Decision context: [the specific roadmap decision the study was designed to inform]

Please:
1. Group features into action buckets: "must build now" (Must-be), "build for differentiation" (Performance with high importance), "candidates for delight" (Attractive with high importance), "defer" (Indifferent), "drop or invert" (Reverse)
2. For each Performance and Attractive feature, estimate how long it might take before competitors catch up and the category shifts toward Must-be
3. Highlight any feature with a high Questionable rate — that suggests the question wording was unclear
4. Flag any feature where the dominant category is close to a runner-up (within 10 percentage points), because the assignment is unstable
5. Draft a one-paragraph executive summary tying the findings to the decision context
6. List the top 3-5 features the team should build first and the bottom 3-5 they should drop
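The dominant-category and stability checks above (items 1 and 4) are easy to precompute before handing the data to an LLM. A minimal sketch, assuming you have each respondent's category label per feature; the function name and the 10-point threshold encoding are illustrative:

```python
from collections import Counter

def dominant_category(responses):
    """Given one feature's per-respondent Kano category labels,
    return (dominant_category, dominant_share_pct, stable), where
    stable is False when the runner-up category is within 10
    percentage points of the dominant one (an unstable assignment)."""
    counts = Counter(responses)
    total = len(responses)
    ranked = counts.most_common()
    top_cat, top_n = ranked[0]
    top_share = 100 * top_n / total
    runner_share = 100 * ranked[1][1] / total if len(ranked) > 1 else 0.0
    return top_cat, top_share, (top_share - runner_share) > 10
```

A 55% Must-be / 45% Performance split, for instance, would be flagged as unstable, while a 70/30 split would not.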

Prompt 4: Compare Kano results across segments

I have Kano category assignments for the same feature list across [number] customer segments:

Feature list: [list]
Segment A categories: [feature: category, importance score]
Segment B categories: [feature: category, importance score]
Segment C categories: [feature: category, importance score]

Please:
1. Identify features where all segments agree on the category (consensus features)
2. Identify features that one segment classifies as Must-be while another classifies as Indifferent (a large divergence)
3. For each divergent feature, suggest what about the segments might explain the difference
4. Recommend whether the product should ship the feature for one segment, build separate experiences, or look for a unified design
5. Flag any feature classified as Reverse for any segment — that segment actively prefers the opposite
6. Suggest follow-up qualitative research questions to validate the largest segment divergences
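The consensus/divergence split in items 1 and 2 can also be computed directly before asking an LLM to explain the differences. A minimal sketch, assuming the per-segment assignments are held as nested dicts; the data shape and function name are assumptions:

```python
def compare_segments(segment_categories):
    """segment_categories: {segment_name: {feature: kano_category}}.
    Returns (consensus, divergent): features where every segment agrees
    on the category, and a dict of per-segment labels for the rest.
    Assumes every segment rated the same feature list."""
    segments = list(segment_categories)
    features = segment_categories[segments[0]]
    consensus, divergent = [], {}
    for feature in features:
        labels = {s: segment_categories[s][feature] for s in segments}
        if len(set(labels.values())) == 1:
            consensus.append(feature)
        else:
            divergent[feature] = labels
    return consensus, divergent
```

The divergent dict is the part worth pasting into the prompt above, since those are the features where segment context is needed to explain the split.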