
How to use the Kano Model: classify features by satisfaction effect

What is the Kano Model?

The Kano Model is a customer satisfaction framework developed by Japanese researcher Noriaki Kano in 1984 that classifies product features into five categories based on how their presence or absence affects user satisfaction: Must-be, Performance, Attractive, Indifferent, and Reverse. The method asks respondents a pair of questions about each feature, a functional question (“How do you feel if you have this feature?”) and a dysfunctional question (“How do you feel if you do not have this feature?”), and maps the answer pair to a category. By revealing which features are baseline expectations, which scale linearly with investment, and which create unexpected delight, the Kano Model gives product teams a structured way to decide where to spend engineering effort beyond simple ranking.

What question does it answer?

  • Which features are basic expectations that users will be angry about if missing, but indifferent to if present?
  • Which features deliver linear satisfaction — the more we invest, the happier users get?
  • Which features create unexpected delight that users do not yet expect but would love?
  • Which features are users actually indifferent about, no matter how much we invest?
  • Which proposed features would users actually prefer the opposite of (Reverse)?
  • How will the categorization of a feature change over time as competitors catch up and expectations rise?

When to use the Kano Model

  • When the team is debating whether a proposed feature is “table stakes” or a real differentiator and needs evidence rather than opinion.
  • When the product backlog mixes basic expectations with potential delighters and engineering effort is being misallocated.
  • When entering a new market or vertical and the team needs to know which features are baseline before investing in differentiators.
  • When evaluating a list of new feature ideas in early discovery, before any prototype exists.
  • When two stakeholders are arguing about whether a feature is “must-have” or “nice-to-have” — Kano forces the question into structured customer data.
  • When designing a new pricing tier or package and the team needs to know which features anchor the basic plan and which justify the premium.

Not the right method when sample sizes are small (under 100 respondents per segment), since Kano category assignments become unstable. Kano is also a poor fit for incremental refinements of an existing feature — it answers “should we build this at all” better than “how should we tune this parameter.” Finally, Kano categories are not static: a feature that classifies as Attractive today often becomes Performance within a year and Must-be within two as competitors copy it.

What you get (deliverables)

  • Categorized feature list: each tested feature labeled as Must-be, Performance, Attractive, Indifferent, or Reverse.
  • Discrete analysis table: response counts in each Kano category per feature, plus the dominant category and runner-up.
  • Continuous analysis scatter plot: features plotted by their average Functional and Dysfunctional scores, with error bars and importance bubble sizes.
  • Stack-ranked feature list combining potential dissatisfaction, potential satisfaction, and customer-stated importance (a scoring sketch follows this list).
  • Self-stated importance scores: a 1–9 rating per feature collected alongside the question pairs.
  • Written recommendation tying each category to a roadmap action: build, invest, defer, or drop.
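
For teams that script the analysis themselves, here is a minimal Python sketch of the stack-ranking deliverable, using the widely cited satisfaction (“Better”) and dissatisfaction (“Worse”) coefficients computed from per-feature category counts. The feature names and counts below are illustrative, not real survey data.

```python
# Minimal sketch: satisfaction ("Better") and dissatisfaction ("Worse")
# coefficients per feature, computed from Kano category counts.
# Counts are illustrative, not real survey data.
category_counts = {
    # feature: {"A": Attractive, "P": Performance, "M": Must-be, "I": Indifferent}
    "bulk_task_editing": {"A": 148, "P": 32, "M": 10, "I": 50},
    "ui_refresh":        {"A": 20,  "P": 40, "M": 50, "I": 90},
}

def better_worse(counts):
    """Return (satisfaction, dissatisfaction) coefficients.

    Better = (A + P) / (A + P + M + I)   -> 0..1, higher = more delight if present
    Worse  = -(P + M) / (A + P + M + I)  -> -1..0, lower = more pain if absent
    Reverse and Questionable responses are excluded from the denominator.
    """
    a, p, m, i = counts["A"], counts["P"], counts["M"], counts["I"]
    total = a + p + m + i
    return (a + p) / total, -(p + m) / total

# Rank features by potential dissatisfaction first, then potential satisfaction.
for feature, counts in sorted(category_counts.items(),
                              key=lambda kv: better_worse(kv[1])):
    better, worse = better_worse(counts)
    print(f"{feature}: better={better:.2f}, worse={worse:.2f}")
```

The self-stated importance rating can then be joined onto this table as a third ranking column.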

Participants and duration

  • Participants: 100 minimum per segment for stable categories; 200+ is preferable.
  • Survey length: 8–15 features, ~30 seconds per feature, total 5–8 minutes.
  • Setup: 2–4 days for feature list, question pairs, survey configuration, and pilot.
  • Field time: 1–2 weeks.
  • Analysis and reporting: 2–4 days, mostly interpretation rather than math.

How to run a Kano study (step-by-step)

1. Choose features and segments

Pick 8–15 candidate features whose presence or absence the user would actually notice. Pick 1–3 customer segments, planning for 100+ respondents per segment.

2. Write the functional and dysfunctional question pair

For each feature, write “How do you feel if you have [feature]?” and “How do you feel if you do not have [feature]?” Phrase the feature in terms of user benefit, not technical implementation. Avoid making the dysfunctional question the polar opposite of the functional one: for a feature such as “all videos load in under 10 seconds,” describe the absence (“some videos take longer than 10 seconds to load”) rather than the opposite (“all videos take longer than 10 seconds to load”). Polar wording biases respondents to flip their answer mechanically.

3. Set the response scale

Use the standard 5-point Kano scale: “I like it / I expect it / I am neutral / I can tolerate it / I dislike it.” Some teams prefer alternative wordings like “This would be very helpful / This is a basic requirement / This would not affect me / This would be a minor inconvenience / This would be a major problem.” Pick one wording and stay consistent.
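
As an illustration of steps 2 and 3, the small Python sketch below builds the functional/dysfunctional pair from a benefit-phrased feature and its described absence, and attaches one consistent 5-point scale. The template phrasing and helper name are assumptions for illustration, not prescribed wording.

```python
# Illustrative sketch of steps 2-3: build the paired Kano questions for a
# feature and attach one consistent 5-point response scale.
KANO_SCALE = [
    "I like it",
    "I expect it",
    "I am neutral",
    "I can tolerate it",
    "I dislike it",
]

def kano_question_pair(benefit: str, absence: str) -> dict:
    """benefit: the feature phrased as a user benefit.
    absence: a description of the feature being absent (not its polar opposite)."""
    return {
        "functional": f"How do you feel if {benefit}?",
        "dysfunctional": f"How do you feel if {absence}?",
        "scale": KANO_SCALE,
    }

pair = kano_question_pair(
    benefit="all videos load in under 10 seconds",
    absence="some videos take longer than 10 seconds to load",
)
print(pair["functional"])
print(pair["dysfunctional"])
```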

4. Add a self-stated importance question

After the functional/dysfunctional pair for each feature, add a 1–9 importance scale: “How important is this feature to you?” This complements the Kano category with a magnitude dimension.

5. Pilot the survey internally

Test with 5 internal team members. If anyone is confused about the difference between the functional and dysfunctional question, the wording needs more work. Confused respondents produce Questionable answers that destroy data quality.

6. Field the survey

Recruit through your usual channel and field for 1–2 weeks. Match the sample to the population the decision affects. If you are studying multiple segments, monitor recruitment per segment, not just total responses.

7. Score the responses (discrete analysis)

For each respondent, look up the answer pair on the Kano evaluation table to get one of six categories (Must-be, Performance, Attractive, Indifferent, Reverse, Questionable). Tally per feature and pick the dominant category. If two are close, use the priority rule Must-be > Performance > Attractive > Indifferent.
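
A minimal Python sketch of this step, assuming answers are recorded as indices on the 5-point scale from step 3. The lookup follows the commonly published Kano evaluation table (minor variants exist), and the tie-break applies the priority rule stated above; the sample responses are illustrative.

```python
from collections import Counter

# Answer indices on the 5-point scale from step 3:
# 0 = like, 1 = expect, 2 = neutral, 3 = tolerate, 4 = dislike
# Rows = functional answer, columns = dysfunctional answer.
# M = Must-be, P = Performance, A = Attractive, I = Indifferent,
# R = Reverse, Q = Questionable. This follows the commonly published
# evaluation table; some sources use slight variants.
EVAL_TABLE = [
    ["Q", "A", "A", "A", "P"],  # functional: like
    ["R", "I", "I", "I", "M"],  # functional: expect
    ["R", "I", "I", "I", "M"],  # functional: neutral
    ["R", "I", "I", "I", "M"],  # functional: tolerate
    ["R", "R", "R", "R", "Q"],  # functional: dislike
]

# Priority rule for near-ties: Must-be > Performance > Attractive > Indifferent.
PRIORITY = {"M": 0, "P": 1, "A": 2, "I": 3, "R": 4, "Q": 5}

def classify(functional: int, dysfunctional: int) -> str:
    """Map one respondent's answer pair to a Kano category."""
    return EVAL_TABLE[functional][dysfunctional]

def dominant_category(answer_pairs, tie_margin=0.05):
    """Tally categories for one feature and pick the dominant one.

    answer_pairs: iterable of (functional_index, dysfunctional_index).
    If the top two categories are within tie_margin of each other
    (as a share of responses), the priority rule breaks the tie.
    """
    counts = Counter(classify(f, d) for f, d in answer_pairs)
    total = sum(counts.values())
    ranked = counts.most_common(2)
    if len(ranked) == 2 and (ranked[0][1] - ranked[1][1]) / total < tie_margin:
        return min(ranked[0][0], ranked[1][0], key=PRIORITY.get), counts
    return ranked[0][0], counts

# Illustrative responses for one feature (not real data):
responses = [(0, 4), (1, 4), (0, 3), (2, 4), (0, 4), (1, 3)]
category, tally = dominant_category(responses)
print(category, dict(tally))
```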

8. Run the continuous analysis (optional)

For 200+ respondents, score answers numerically and calculate the average Functional and Dysfunctional score per feature. Plot each feature on a scatter plot with the Functional score on one axis and Dysfunctional score on the other. The position reveals the category visually, and the standard deviation shows how stable the categorization is.
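
A pandas/matplotlib sketch of the continuous analysis follows. The numeric weights are one common mapping (the Folding Burritos guide describes a similar weighting); published sources differ slightly, so treat the exact values as an assumption, and the response rows shown are illustrative.

```python
import pandas as pd
import matplotlib.pyplot as plt

# One common numeric mapping for continuous Kano analysis (exact weights
# vary between sources, so treat these values as an assumption):
FUNCTIONAL_SCORE = {"like": 4, "expect": 2, "neutral": 0, "tolerate": -1, "dislike": -2}
DYSFUNCTIONAL_SCORE = {"like": -2, "expect": -1, "neutral": 0, "tolerate": 2, "dislike": 4}

# Illustrative long-format responses; in practice this comes from the survey export.
df = pd.DataFrame(
    [
        ("bulk_task_editing", "like", "neutral"),
        ("bulk_task_editing", "like", "tolerate"),
        ("ui_refresh", "neutral", "neutral"),
        ("ui_refresh", "like", "neutral"),
    ],
    columns=["feature", "functional", "dysfunctional"],
)

df["f_score"] = df["functional"].map(FUNCTIONAL_SCORE)
df["d_score"] = df["dysfunctional"].map(DYSFUNCTIONAL_SCORE)

# Mean places each feature on the chart; standard deviation shows how
# stable the categorization is across respondents.
summary = df.groupby("feature")[["f_score", "d_score"]].agg(["mean", "std"])

fig, ax = plt.subplots()
ax.errorbar(
    summary[("d_score", "mean")],
    summary[("f_score", "mean")],
    xerr=summary[("d_score", "std")],
    yerr=summary[("f_score", "std")],
    fmt="o",
)
for feature, row in summary.iterrows():
    ax.annotate(feature, (row[("d_score", "mean")], row[("f_score", "mean")]))
ax.set_xlabel("Dysfunctional score (higher = more pain when absent)")
ax.set_ylabel("Functional score (higher = more delight when present)")
plt.show()
```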

9. Translate categories into roadmap actions

Build all Must-be features (you have no choice — their absence will kill satisfaction). Add as many Performance features as the budget allows. Sprinkle in 1–2 Attractive features as differentiators. Cut Indifferent features from the roadmap. Reverse features should be inverted (“users want the opposite, so build the opposite”) or dropped.

How AI changes this method

AI compatibility: partial — AI can speed up the survey design, the categorization math, and the interpretation, but it cannot replace the human respondents. Synthetic respondents (asking an LLM to “answer as a target user”) produce unreliable results in customer-preference studies, and Kano specifically depends on emotional reactions to feature scenarios that LLMs cannot reliably simulate.

What AI can do

  • Generate the candidate feature list: An LLM takes a product description and produces 15–20 feature ideas to test, drawing on category conventions and competitive context.
  • Write functional/dysfunctional question pairs: An LLM generates the paired questions in the correct format, automatically handling the “describe the absence, not the opposite” nuance.
  • Pilot the question wording: A model can flag wordings that sound like polar opposites, that contain technical jargon, or that mix benefit with implementation.
  • Run the discrete and continuous analysis: Tools like Conjointly, Qualtrics, Survalyzer, and the Folding Burritos analysis spreadsheet automate the lookup table and continuous scoring.
  • Interpret category assignments: An LLM can produce a first-draft narrative report grouped into “build now,” “differentiator,” “defer,” and “cut.”

What requires a human researcher

  • Choosing the features and segments: These are strategic choices that depend on roadmap context, not data.
  • Real human respondents: Synthetic respondents systematically miss real human emotional reactions. Microsoft, Ipsos, and academic researchers have published evidence that LLM-generated preference data does not match human data on customer-feedback methods.
  • Distinguishing real Must-be from “everyone says yes”: A category that looks like Must-be in the data may reflect respondents agreeing politely. Validating with qualitative interviews is human work.
  • Interpreting temporal drift: Knowing whether today’s “Attractive” will become tomorrow’s “Must-be” requires market and competitive judgment outside the data.
  • Defending the result to stakeholders: Translating “feature X scored 64% Must-be” into a roadmap commitment requires reading the room and tying the data to business goals.

AI-enhanced workflow

Before AI, a Kano study took 3–4 weeks: stakeholder workshops to choose features, manual question writing, survey configuration, fielding, manual lookup of every response in the evaluation table, spreadsheet calculations, and report writing.

With AI in the loop, the analyst drafts the feature list with ChatGPT in 30 minutes, has the model write the question pairs in another 15 minutes, edits both for accuracy, and ships the survey by the end of day one. After fielding, the response data goes into a Conjointly or Qualtrics report for automatic categorization. The first-draft findings report is generated by an LLM, which the analyst edits for tone and verifies for accuracy. The whole workflow can compress from 3–4 weeks to 5–7 days.

The unchanged part is the data itself. Synthetic Kano responses produce data that looks plausible but does not match real human reactions. Microsoft’s UX research team published a re-framing of Kano specifically for AI-powered features that uses real users for exactly this reason: even when studying AI features, the respondents must be human.

Tools

Kano-specific platforms: Conjointly, Qualtrics (Kano question type in CoreXM), Survalyzer, CraftUp Kano Survey Builder (free), SurveyMars.

General survey platforms with Kano support: SurveyMonkey, Typeform, Google Forms, Pollfish, IdeaPlan.

Analysis spreadsheets and scripts: Folding Burritos Excel spreadsheet (free with the complete guide), R packages for Kano scoring, custom Python scripts using pandas.

Visualization: Conjointly interactive reports, Tableau or Looker for custom Kano scatter plots, Excel or Google Sheets for the analysis tables.

AI assistance: ChatGPT or Claude for feature list generation, question writing, response interpretation, and report drafting.

Works well with

  • MaxDiff (Mx): MaxDiff ranks features by relative importance; Kano classifies them by satisfaction effect. Together they answer “what do users want most?” and “are these expected or delightful?”
  • Survey (Sv): Kano is itself a survey method, but the categorization often raises follow-up questions that need open-ended items immediately after the Kano block.
  • In-depth Interview (Di): Kano scores tell you what category each feature falls into; interviews tell you why. Running 5–8 interviews after the survey explains the categories.
  • Concept Testing (Ct): When Kano identifies a feature as Attractive, concept testing builds a quick mockup and validates that the predicted delight shows up in real interaction.
  • Persona Building (Ps): Personas describe segments qualitatively; Kano segmented by persona shows quantitatively which features are Must-be for each persona.

Example from practice

A B2B project management SaaS was planning a major release with budget for 10 new features. The product team had a list of 14 candidates and was arguing about which to cut. A power-user PM championed advanced reporting; a customer-success PM championed onboarding improvements; the design lead pushed for a UI refresh.

The team ran a Kano study with the 14 candidates and 240 customers split across two segments: solo users and team leads. The survey took 8 minutes per respondent. Discrete analysis revealed that the UI refresh was Indifferent for both segments (40% Indifferent, 25% Must-be, 20% Performance — no clear winner), the advanced reporting was Performance for team leads (62%) but Indifferent for solo users (51%), and the onboarding improvements were Must-be for new accounts of both segments. Three features the team had not been arguing about scored as Attractive with high importance: bulk task editing (74% Attractive), integration with Slack (68% Attractive), and personal dashboards (61% Attractive).

The team cut the UI refresh and one of the underperforming reporting variants, kept advanced reporting as a team-leads-only feature, and elevated the three Attractive features to top priority. They shipped the release four months later. Six months after launch, NPS rose 11 points overall, and the three Attractive features were named in 60% of unsolicited customer praise emails.

AI prompts for this method

Four ready-to-use AI prompts with placeholders are available for this method: copy, paste, and fill in with your context. See all prompts for the Kano Model.