
Three schools of Jobs to be Done: how a milkshake story split into three research methods

In 1999, a consultant named Tony Ulwick walked into Clayton Christensen’s office at Harvard Business School and showed him a framework for measuring what customers want. Christensen was impressed. He included Ulwick’s examples in his 2003 book The Innovator’s Solution and gave the concept a catchy name: Jobs to be Done.

Around the same time, another consultant — Bob Moesta — was working with Christensen on a different angle. Moesta was less interested in what tasks people perform and more interested in why they make the decisions they make. He studied home buyers, mattress shoppers, and fast-food customers, reconstructing the emotional story behind each purchase.

By the mid-2010s, both Ulwick and Moesta were running successful consulting firms, both claiming to practice “JTBD,” and both meaning entirely different things by it. A third approach — Jim Kalbach’s practical canvas — appeared in 2020, bridging the gap for teams that just wanted a usable tool.

Today, if you google “How to do JTBD research,” you will find advice from all three schools mixed together, often on the same page, without any indication that the methods are not interchangeable. A researcher who follows Moesta’s interview script while trying to fill Ulwick’s opportunity scorecard will produce confused data that answers neither question.

This article explains what each school actually does, where their philosophies diverge, and how to choose the right one.

The disagreement at the root

The three schools agree on one premise: people do not buy products for their features. They “hire” products to get something done in their lives. Beyond that, the agreement ends.

Moesta’s question: Why did you switch? What was your life like before, and what pushed you to change? The answer reveals motivation — the emotional and situational context behind a purchase decision.

Ulwick’s question: What are you trying to accomplish, and how well does the current solution perform? The answer reveals unmet outcomes — measurable gaps in how well a task gets done.

Kalbach’s question: What is the job, and can the whole team agree on it? The answer produces alignment — a shared canvas that product, design, and engineering can reference.

Alan Klement, a JTBD theorist who worked with Christensen’s team, put the distinction sharply. He calls Moesta’s approach Jobs-As-Progress and Ulwick’s approach Jobs-As-Activities, and argues they are not just different methods but incompatible models of why people buy.

Moesta’s model says people do not want to mow the lawn. They want a home that looks cared for, and they will hire any solution that delivers that — including a lawn service, an automated mower, or artificial turf. The activity is incidental.

Ulwick’s model says people do want to mow the lawn, and the product should help them do it with fewer passes, less noise, and a more even cut. The activity is the unit of analysis.

Both models have produced real business results. The question is not which is theoretically correct but which matches what you need to learn.

The mini periodic table of JTBD

We are building a periodic table of research methods — an interactive tool for choosing the right method for the right question. Three of the elements in that table are JTBD methods: the Switch Interview, ODI (Outcome-Driven Innovation), and the Canvas Workshop.

Each element links to a full method guide with step-by-step instructions, AI prompts, and a checklist.


How each method works in practice

Switch Interview: the documentary about your customer’s decision

Moesta describes his interview technique as shooting a documentary. You sit down with someone who recently bought your product (or left it) and ask them to tell you the story of how they got there. Not “what features did you like” but “tell me about the day you decided to sign up — what was happening?”

The interviewer maps the story onto six stages of the buying timeline: first thought, passive looking, active looking, deciding, onboarding, ongoing use. At each stage, four forces are in play:

  • Push (F1): Frustration with the current situation. “Monday reporting was eating my entire morning.”
  • Pull (F2): The attraction of something new. “I saw a demo and imagined getting Mondays back.”
  • Anxiety (F3): Fear of change. “What if I migrate everything and it doesn’t work?”
  • Habit (F4): Comfort with the status quo. “My whole team knows the old spreadsheet.”

The switch happens when push + pull outweigh anxiety + habit. The most actionable finding is usually in F3 — a specific fear that the product team can address through design, messaging, or onboarding.
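The balance is qualitative, but it reduces to a simple inequality. Here is a minimal sketch in Python, with hypothetical force scores of the kind a researcher might assign after coding a transcript (the 0-10 scale is our illustration, not part of Moesta's method):

```python
from dataclasses import dataclass

@dataclass
class Forces:
    push: float     # F1: frustration with the current situation
    pull: float     # F2: attraction of the new solution
    anxiety: float  # F3: fear of change
    habit: float    # F4: comfort with the status quo

def likely_to_switch(f: Forces) -> bool:
    # Moesta's balance: the switch happens when the forces of progress
    # (push + pull) outweigh the forces of inertia (anxiety + habit).
    return f.push + f.pull > f.anxiety + f.habit

# Hypothetical scores coded from one interview transcript
print(likely_to_switch(Forces(push=8, pull=6, anxiety=7, habit=5)))  # True
```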

A typical study involves 8-12 interviews, takes 2-4 weeks, and costs relatively little beyond participant incentives. The output goes straight to marketing (positioning, messaging, ad targeting) and product (onboarding redesign, churn prevention).

ODI: the scorecard for unmet needs

Ulwick’s approach starts from the premise that every “job” (an activity customers want to perform) consists of dozens of desired outcomes — measurable criteria for how well the job gets done. “Minimize the time it takes to create a playlist in the correct order” is an outcome for the job “listen to music.”

The research happens in two phases. First, qualitative interviews with 15-20 people who perform the job. The researcher extracts 50-150 desired outcome statements, each following a strict format: direction of improvement + metric + object of control. These are not features — they are performance criteria customers use to judge any solution.
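The strict format can be made concrete with a tiny sketch; the class and field names here are ours, invented for illustration, not Ulwick's terminology:

```python
from dataclasses import dataclass

# Hypothetical representation of the three-part ODI format:
# direction of improvement + metric + object of control.
@dataclass
class OutcomeStatement:
    direction: str          # e.g. "Minimize"
    metric: str             # e.g. "the time it takes to"
    object_of_control: str  # e.g. "create a playlist in the correct order"

    def render(self) -> str:
        return f"{self.direction} {self.metric} {self.object_of_control}"

print(OutcomeStatement(
    "Minimize", "the time it takes to",
    "create a playlist in the correct order",
).render())
# -> Minimize the time it takes to create a playlist in the correct order
```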

Second, a quantitative survey asks 50-200+ respondents to rate each outcome on importance and current satisfaction. The Opportunity Score (importance plus the gap between importance and satisfaction, with the gap floored at zero) reveals which outcomes are underserved. Statistical segmentation then finds groups of customers who share similar unmet needs — segments defined not by demographics but by what they struggle with.
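Flooring the gap at zero means an overserved outcome (satisfaction above importance) earns no negative credit. A minimal sketch of the calculation, assuming importance and satisfaction arrive on a 0-10 scale; the example numbers are hypothetical:

```python
def opportunity_score(importance: float, satisfaction: float) -> float:
    """Ulwick's opportunity algorithm: importance plus the unmet gap.

    Both inputs are on a 0-10 scale. The gap is floored at zero so an
    overserved outcome is not penalized.
    """
    return importance + max(importance - satisfaction, 0.0)

# Hypothetical survey results for two outcomes: (importance, satisfaction)
outcomes = {
    "Minimize time to create a playlist in the correct order": (8.2, 4.1),
    "Minimize the likelihood of hearing the same song twice": (6.5, 7.8),
}
for statement, (imp, sat) in outcomes.items():
    # In ODI practice, scores above roughly 10 flag underserved outcomes.
    print(f"{opportunity_score(imp, sat):5.1f}  {statement}")
# 12.3 -> underserved; 6.5 -> adequately served
```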

A typical study takes 4-8 weeks, requires statistical expertise, and costs significantly more than a Switch Interview. The output goes to product strategy: which outcomes to target, which segments to serve, which innovation strategy to pursue (differentiated, dominant, disruptive, or discrete).

Canvas Workshop: the alignment tool

Kalbach’s approach is less about discovery and more about structured decision-making. A cross-functional team sits down in a workshop, defines the domain (where do we want to innovate?), the job performer (who are we innovating for?), and the main job (what are they trying to accomplish?).

After the workshop, 5-10 investigation interviews with real job performers validate and enrich the canvas. The team fills in job steps, desired outcomes, emotional and social aspects, and job differentiators. A second workshop prioritizes these elements. An optional survey validates the prioritization at scale.

The canvas itself becomes the shared reference document — a single artifact that product, design, engineering, and marketing can all point to when making decisions. GitLab published their entire JTBD process as an open-source playbook, making the Canvas approach accessible to any team.

A typical study takes 3-6 weeks and works best when the goal is team alignment rather than deep discovery or market sizing.

Which role benefits from which school

Different teams extract different value from different schools. This is not about theory — it reflects how the outputs of each method map onto daily work.

Marketing gravitates toward the Switch Interview. The four forces framework produces the exact language customers use when describing their situation — that language becomes ad copy, landing page headlines, and email sequences. Trigger events from the interviews become targeting criteria: reach people at the moment they feel the push. Moesta named his book Demand-Side Sales for a reason — the method was built for understanding how demand forms.

Sales also relies on the Switch Interview, but for different outputs. The anxiety force (F3) maps directly onto buyer objections. When a salesperson knows that the most common anxiety is “What if my team doesn’t adopt it?”, they can address it before the prospect raises it. The competitive map — built from what buyers actually considered, not what the company assumes — becomes battle cards.

Product management leans toward ODI. Opportunity scores give product managers a quantitative answer to “what should we build next?” that does not depend on the loudest stakeholder or the most recent customer complaint. Outcome-based segments tell engineering who they are building for and what “better” means in measurable terms.

Strategy and C-level use ODI for market sizing, segmentation by unmet needs, and innovation strategy decisions (differentiated, dominant, disruptive, or discrete). The data supports investment decisions with more confidence than qualitative interviews alone.

UX researchers often start with the Canvas Workshop to align the team on the job, then run Switch Interviews for deep discovery. ODI enters when quantitative validation is needed. The Canvas serves as the shared artifact that product, design, and engineering reference throughout the project.

Customer success benefits from Switch Interviews conducted with churners — people who “fired” the product. The four forces reveal what pushed them out and what could have retained them, giving CS teams specific retention levers rather than generic “check in more often” advice.

Founders and startups almost always start with Switch Interviews. The method is cheap (8-12 interviews, no survey infrastructure), fast (2-4 weeks), and produces immediate GTM insights. ODI requires resources most startups do not have. The Canvas is useful once the team grows beyond 3-4 people.

| Role | Primary school | What they get from it |
|---|---|---|
| Marketing | Switch Interview | Customer language for copy, trigger events for targeting |
| Sales | Switch Interview | Objection handling (F3 anxiety), competitive battle cards |
| Product management | ODI | Feature prioritization by opportunity scores |
| Strategy / C-level | ODI | Market sizing, outcome-based segments, innovation strategy |
| UX researcher | Canvas + Switch | Team alignment artifact + deep qualitative discovery |
| Customer success | Switch (with churners) | Retention levers from the four forces |
| Founder / startup | Switch Interview | Fast, cheap GTM insights from 8-12 interviews |

Decision guide: which school for which question

| Your question | Method | Why |
|---|---|---|
| Why do people buy (or not buy) our product? | Switch Interview | Reveals purchase motivation, not task performance |
| Which features should we build? Where is the biggest unmet need? | ODI | Maps the full outcome space and quantifies the gaps |
| How do we align the team on what we are building for? | Canvas Workshop | Creates a shared artifact for cross-functional teams |
| Is there demand in a new market? | Switch Interview first, then ODI | Switch reveals whether people are already switching; ODI maps the outcome space once the market looks viable |
| Why is churn so high? | Switch Interview (with churners) | Four forces reveal what drives people out and what could retain them |
| How do we position against competitors? | Switch Interview | The competitive map comes from the alternatives buyers actually considered |
| We need to segment our market by needs, not demographics | ODI | Outcome-based segmentation finds hidden segments that demographic screening misses |

How AI changes JTBD research — and where it does not

All three JTBD methods involve labor-intensive steps that AI can accelerate, but each method has a different bottleneck and a different human-dependent core. The table below maps what changes and what stays the same.

| Step | Switch Interview | ODI | Canvas Workshop |
|---|---|---|---|
| Preparation | AI can draft a timeline framework and suggest probing questions for the six stages. The researcher still defines the switching decision to study and recruits participants with recent memory. | AI can generate a hypothesis job map with 15-20 steps and candidate outcome statements. The researcher still defines the job-to-be-done — the single decision that shapes everything downstream. | AI can populate an initial canvas with job steps, likely emotions, and circumstances in under an hour. The team still needs to agree on the job performer and main job through discussion, not delegation. |
| Interviews | AI cannot conduct a Switch Interview. The method depends on real-time rapport, emotional probing, and the interviewer’s ability to follow the participant’s story wherever it leads. Synthetic respondents miss the surprising details that make JTBD research valuable. | AI cannot replace outcome discovery interviews. The researcher must push past feature requests to extract measurable, solution-independent outcome statements — a skill that requires live judgment. | Kalbach is explicit: AI is “incomplete and lacks real-world context.” At least 6-8 real interviews are needed. AI can supplement, not substitute. |
| Analysis | AI reduces transcript analysis from 2-3 hours per interview to 20-30 minutes. An LLM can extract the four forces, map the purchasing timeline, and identify trigger events. The researcher reviews, corrects, and spots patterns across interviews. | AI cuts the consolidation of 300+ raw outcomes from days to hours. An LLM clusters duplicates, standardizes wording, and flags outcomes that violate ODI formatting rules. The researcher validates every statement. | AI extracts JTBD elements (steps, criteria, emotions, circumstances) from transcripts and identifies which appear most frequently. Cross-interview pattern detection drops from a full day to minutes. |
| Synthesis | AI can draft force diagrams and job stories. It cannot decide which stories matter most for the business or how to position the product — that requires competitive awareness and strategic judgment. | AI can calculate opportunity scores, run cluster analysis, generate segment profiles, and draft the opportunity landscape. It cannot decide which segments to target or what innovation strategy to pursue. | AI can draft job stories and HMW statements. It cannot facilitate the team workshop where these artifacts get debated, refined, and turned into commitments. |

The pattern across all three methods: AI compresses the mechanical middle — transcription, extraction, scoring, pattern matching — but leaves the strategic bookends intact. The researcher still frames the question at the start and makes the decisions at the end. Skipping either bookend produces fast but unreliable results.
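To make that mechanical middle concrete, here is a hedged sketch of four-forces extraction using the OpenAI Python client. The prompt, model name, and output format are illustrative assumptions, not a tested pipeline:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

FORCES_PROMPT = """You are analyzing a Switch Interview transcript.
Extract direct quotes as evidence for each of the four forces:
F1 push (frustration with the current situation), F2 pull (attraction
of the new solution), F3 anxiety (fear of change), F4 habit (comfort
with the status quo). If the interviewer never probed a force, say so
explicitly instead of inferring an answer.
Return JSON with keys: push, pull, anxiety, habit."""

def extract_forces(transcript: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any capable model works
        messages=[
            {"role": "system", "content": FORCES_PROMPT},
            {"role": "user", "content": transcript},
        ],
    )
    # First draft only: the researcher still verifies every quote
    # against the transcript before it goes into a force diagram.
    return response.choices[0].message.content
```

Note the instruction to flag unprobed forces rather than infer them — that guards against exactly the plausible-but-wrong artifacts described below.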

Timeline impact: teams with AI support report roughly a 40-50% reduction in total project time, concentrated in the analysis phase. A Switch Interview project drops from 3-4 weeks to 2 weeks. A full-scale ODI engagement drops from 16-24 weeks to 8-12; the leaner 4-8 week study described above compresses by a similar fraction. A Canvas Workshop drops from 3-4 weeks to 1.5-2 weeks. The interviews themselves take the same time — AI does not make conversations shorter.

The risk to watch for: AI-generated JTBD artifacts (force diagrams, outcome statements, job stories) look plausible even when they are wrong. An LLM will produce a clean force diagram from a transcript where the interviewer never actually probed the anxiety force — it will infer what the anxiety “probably” was. The researcher must treat AI output as a first draft to verify, not a finished product to deliver.

What does not work

Mixing methods in the same study. Running ODI-style outcome interviews and Moesta-style switching interviews with the same participants, in the same session, confuses them and contaminates the data. Pick one method per study.

Applying Switch Interview findings to feature prioritization. Job stories tell you what people want to become, not which feature to build next. If you need feature-level prioritization, you need ODI’s opportunity scores.

Using ODI to fix churn. Opportunity scores tell you where the task is underserved, but they will not tell you what emotional barrier stopped someone from switching or what habit kept them with a competitor. For that, you need the four forces.

Skipping the method and just “doing JTBD.” The phrase “we do JTBD” without specifying which school is like saying “we do statistics” without specifying whether you mean descriptive, inferential, or Bayesian. The tools are different, and using the wrong one gives you data that does not answer your question.

The books worth reading

If you want to go deeper into any school:

For the Switch Interview school, start with Moesta’s Demand-Side Sales 101 — short, practical, full of interview examples. Then read Christensen’s Competing Against Luck for the theory. For a free deep dive into the theoretical foundations, download Klement’s When Coffee and Kale Compete.

For ODI, read Ulwick’s Jobs to Be Done: Theory to Practice. It is the most complete description of the ODI process, with case studies from Strategyn’s consulting work.

For the Canvas approach, read Kalbach’s The Jobs to Be Done Playbook. It is hands-on, with templates and workshop facilitation guides. Also check the GitLab JTBD Playbook, which is open-source and battle-tested.