
How to conduct Outcome-Driven Innovation (ODI): a practical guide with AI prompts

Outcome-Driven Innovation (ODI) is a quantitative research process developed by Tony Ulwick (Strategyn) that treats innovation as a measurable discipline rather than a creative gamble. The method discovers what customers need by identifying the desired outcomes they use to measure success when executing a job-to-be-done, then surveys a large sample to find which outcomes are underserved. ODI belongs to the “Jobs-As-Activities” school of JTBD and has demonstrated an 86% product success rate across 1,500+ engagements over three decades.

Where ODI fits among JTBD approaches

Jobs to be Done is not a single method but a family of approaches built on a shared premise: people “hire” products to make progress. Three distinct schools have emerged, each with its own research method, philosophy, and output:

  • JTBD Switch Interview — Bob Moesta / Clayton Christensen. Qualitative. Studies why people switch between solutions. Outputs: force diagrams, job stories, switching timelines. Best for: marketing, positioning, sales, onboarding, churn reduction.
  • Outcome-Driven Innovation — Tony Ulwick. Quantitative. Maps desired outcomes of a task and finds underserved ones. Outputs: job maps, opportunity scores, outcome-based segments. Best for: systematic product innovation, feature prioritization, market segmentation.
  • JTBD Canvas Workshop — Jim Kalbach. Mixed methods. Practical synthesis of both schools using a canvas format. Outputs: JTBD Canvas, job stories, opportunity maps. Best for: agile teams, cross-functional alignment.

The ODI and Switch Interview schools are built on incompatible assumptions about why people buy. Ulwick’s school holds that people want to execute a task (a “job”) and the product should help them do it better — faster, more accurately, with fewer errors. Moesta’s school holds that people do not want to “do work” — they want progress, a change in their life situation. This difference means different interview guides, different questions, and different outputs. Choose based on your research goal: if you need to understand purchase motivation, use the Switch Interview; if you need to systematically find innovation opportunities across a large outcome space, use ODI.

What question does Outcome-Driven Innovation answer?

  • What are the 100-150 measurable outcomes customers use to judge whether they are doing their job well?
  • Which of these outcomes are underserved — important to customers but poorly satisfied by current solutions?
  • Which outcomes are overserved — heavily addressed by existing products but not actually important to customers?
  • Are there hidden segments of customers who struggle with different sets of outcomes than the mainstream?
  • Where exactly should R&D, product, and marketing investments be directed to achieve the highest growth impact?
  • What is the size of each opportunity, measured by the gap between importance and satisfaction?

When to use

  • When you need to prioritize features or investments across a large set of customer needs with statistical confidence — not gut feeling. ODI replaces opinion-driven roadmaps with data-driven ones.
  • When entering a new market and you need to discover what incumbents have missed. The opportunity algorithm reveals underserved outcomes that competitors have overlooked.
  • When the product has reached maturity and incremental improvements deliver diminishing returns. ODI identifies which outcomes still have significant room for improvement.
  • When you need to segment the market by actual customer needs instead of demographics. Outcome-based segments reveal groups of people with different unmet needs — segments invisible to traditional demographic or behavioral segmentation.
  • When you want to evaluate competitive positioning by scoring each competitor’s products against the full set of customer outcomes. This reveals white space and defensible differentiation opportunities.
  • When multiple teams (product, engineering, marketing, sales) need a shared, objective language for discussing customer needs. Outcome statements provide that common vocabulary.

Not the right method when:

  • You need to understand the emotional and social context of a purchase decision (use the Switch Interview).
  • You lack the budget for a quantitative survey of 180-600 respondents.
  • The product category is entirely new and customers cannot articulate outcomes for a job they have never performed.
  • Speed matters more than rigor: ODI typically takes 8-24 weeks.
  • You are optimizing a UI interaction (use usability testing).

What you get (deliverables)

  • Job map: a visual breakdown of the customer’s job into 8-12 sequential steps (define, locate, prepare, confirm, execute, monitor, modify, conclude), each generating its own set of desired outcomes
  • Desired outcome statements: 100-150 measurable, solution-independent statements in a standard format: “[direction of improvement] + [unit of measure] + [object of control] + [contextual clarifier]” — e.g., “minimize the time it takes to locate the relevant blood vessel”
  • Opportunity scores: each outcome scored using the Opportunity Algorithm (opportunity = importance + max(0, importance − satisfaction)), revealing where the market is underserved and overserved
  • Opportunity landscape: a scatter plot of all outcomes on importance × satisfaction axes, showing clusters of opportunity and overservice
  • Outcome-based segments: groups of customers defined by which outcomes they find underserved — often 3-6 segments that are invisible to demographic segmentation
  • Innovation strategy: a document specifying which segments to target, which outcomes to address, what type of strategy to pursue (differentiated, disruptive, dominant), and how to position existing and new products

Participants and duration

  • Qualitative phase (outcome discovery): 12-30 interviews with people who regularly perform the job-to-be-done. These are not traditional user interviews — the goal is to extract measurable outcome statements, not stories or feelings. Participants should represent different segments and experience levels.
  • Quantitative phase (opportunity survey): 180-600+ respondents who perform the job. Each respondent rates every outcome on two scales: importance (1-5) and satisfaction (1-5). With 100+ outcomes, the survey contains 200+ rating questions. Statistical validity requires a minimum of 180 responses; 360+ enables reliable segment analysis.
  • Session length: qualitative interviews last 45-60 minutes. The survey takes 15-25 minutes depending on outcome count.
  • Total duration: 8-24 weeks. Job definition and scope: 1-2 weeks. Qualitative interviews and outcome extraction: 3-6 weeks. Survey design and fielding: 2-4 weeks. Analysis, segmentation, and strategy: 2-6 weeks.

How to conduct Outcome-Driven Innovation (step-by-step)

1. Define the market around the customer’s job-to-be-done

Pick a single job that your target customers are trying to accomplish. Define it as a verb + object + contextual clarifier, independent of any product or technology. For example: “tradesmen cutting wood in a straight line” or “parents passing on life lessons to children.” The job definition must be stable over time (valid for decades, not tied to current technology), solution-independent (no product names), and customer-centric (from the customer’s perspective, not the company’s). Test your definition by asking: would this job exist if our product disappeared? If yes, the definition is correct.

2. Build the job map

Break the job into its component steps using Ulwick’s universal job map framework: define what needs to be done, locate the necessary inputs, prepare the inputs and the environment, confirm readiness, execute the core task, monitor the results, make modifications if needed, and conclude the job. Each step becomes a source of desired outcomes. The map is not a user flow or a process diagram — it describes how the customer thinks about the job, not how your product works.

3. Conduct qualitative interviews to extract outcome statements

Interview 12-30 people who perform the job regularly. For each step in the job map, ask: “When you are [doing this step], what does success look like? How do you know you are doing it well? What could go wrong?” Translate every answer into a formal outcome statement: “[direction of improvement] + [performance metric] + [object of control] + [contextual clarifier].” Directions are: minimize, increase, reduce the likelihood of. Example: “minimize the time it takes to identify the correct drill bit for the material.” Aim for 100-150 outcomes across all job steps. Remove duplicates, validate wording with participants, and group outcomes by job step.
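The four-part statement format above can be made concrete with a small sketch. This is an illustrative data structure, not an official Strategyn schema; the field names and the allowed-direction set are assumptions drawn from the format described in this guide.

```python
from dataclasses import dataclass

# Directions named in this guide; extend per project if needed.
ALLOWED_DIRECTIONS = {"minimize", "increase", "reduce the likelihood of"}

@dataclass
class OutcomeStatement:
    direction: str       # e.g. "minimize"
    metric: str          # e.g. "the time it takes"
    obj: str             # e.g. "to identify the correct drill bit"
    clarifier: str = ""  # e.g. "for the material"

    def __post_init__(self):
        # Catch malformed statements early, before they reach the survey.
        if self.direction not in ALLOWED_DIRECTIONS:
            raise ValueError(f"unexpected direction: {self.direction!r}")

    def render(self) -> str:
        parts = [self.direction, self.metric, self.obj, self.clarifier]
        return " ".join(p for p in parts if p)

stmt = OutcomeStatement("minimize", "the time it takes",
                        "to identify the correct drill bit", "for the material")
print(stmt.render())
# minimize the time it takes to identify the correct drill bit for the material
```

Keeping outcomes structured this way (rather than as free text) makes the later de-duplication and survey-generation steps mechanical.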

4. Design and field the quantitative survey

Create a survey where respondents rate each outcome on two dimensions: importance (how important is this outcome to you when performing the job?) and satisfaction (how satisfied are you with how well current solutions help you achieve this outcome?). Both scales run 1-5. Add demographic and behavioral questions for later segmentation analysis. Keep the survey as compact as possible — group outcomes by job step, use consistent layout, and test extensively before launch. Target 180-600 respondents. Use incentives: ODI surveys demand sustained attention across 200+ rating questions.
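A minimal sketch of turning a consolidated outcome list into paired importance/satisfaction survey items, grouped by job step as recommended above. The question wording and ID scheme are illustrative assumptions, not a validated instrument.

```python
# Hypothetical question templates; adapt wording to your survey platform.
IMPORTANCE_TPL = ("When performing the job, how important is it to you to "
                  "{outcome}? (1 = not at all important, 5 = extremely important)")
SATISFACTION_TPL = ("How satisfied are you with how well current solutions let "
                    "you {outcome}? (1 = not at all satisfied, 5 = extremely satisfied)")

def build_survey(outcomes_by_step: dict[str, list[str]]) -> list[dict]:
    """Emit two rating items (importance, satisfaction) per outcome."""
    items = []
    for step, outcomes in outcomes_by_step.items():
        for i, outcome in enumerate(outcomes, start=1):
            base_id = f"{step}-{i:02d}"
            items.append({"id": f"{base_id}-imp", "step": step,
                          "text": IMPORTANCE_TPL.format(outcome=outcome)})
            items.append({"id": f"{base_id}-sat", "step": step,
                          "text": SATISFACTION_TPL.format(outcome=outcome)})
    return items

survey = build_survey({
    "execute": ["minimize the time it takes to adjust the saw's cutting angle"],
})
for item in survey:
    print(item["id"])
# execute-01-imp
# execute-01-sat
```

The resulting list can be exported as CSV for import into Qualtrics or SurveyMonkey.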

5. Calculate opportunity scores and plot the opportunity landscape

Apply the Opportunity Algorithm to each outcome: opportunity = importance + max(0, importance − satisfaction). Following Ulwick’s convention, importance and satisfaction enter the formula on a 0-10 scale: the percentage of respondents rating the outcome 4 or 5, divided by 10. Opportunity scores therefore range from 0 to 20. Outcomes scoring above 10 are underserved (high opportunity); above 12, severely underserved. Outcomes where satisfaction meets or exceeds importance are overserved (candidates for cost reduction). Plot all outcomes on a scatter chart with importance on the Y-axis and satisfaction on the X-axis. The upper-left quadrant (high importance, low satisfaction) contains the highest-value innovation opportunities.
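A minimal pandas sketch of the scoring step, assuming raw 1-5 survey ratings and Ulwick's top-2-box convention (importance and satisfaction as the share of respondents rating 4 or 5, scaled to 0-10, so scores run 0-20). The toy data is invented for illustration.

```python
import pandas as pd

def top2box(ratings: pd.Series) -> float:
    """Share of respondents rating 4 or 5, scaled to 0-10."""
    return 10 * (ratings >= 4).mean()

def opportunity_scores(imp: pd.DataFrame, sat: pd.DataFrame) -> pd.DataFrame:
    """imp / sat: respondents x outcomes matrices of raw 1-5 ratings."""
    importance = imp.apply(top2box)
    satisfaction = sat.apply(top2box)
    # opportunity = importance + max(0, importance - satisfaction)
    opp = importance + (importance - satisfaction).clip(lower=0)
    return pd.DataFrame({"importance": importance,
                         "satisfaction": satisfaction,
                         "opportunity": opp}).sort_values("opportunity",
                                                          ascending=False)

# Toy example: 5 respondents, 2 hypothetical outcomes
imp = pd.DataFrame({"adjust_angle": [5, 5, 4, 5, 4], "blade_bind": [3, 2, 4, 3, 2]})
sat = pd.DataFrame({"adjust_angle": [2, 1, 2, 3, 1], "blade_bind": [4, 4, 5, 4, 3]})
print(opportunity_scores(imp, sat))
```

Here "adjust_angle" comes out maximally underserved (importance 10, satisfaction 0, opportunity 20), while "blade_bind" is overserved (satisfaction exceeds importance, so the gap term is clipped to zero).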

6. Discover outcome-based segments

Run cluster analysis (k-means, latent class, or hierarchical clustering) on the importance and satisfaction ratings to identify groups of respondents with different unmet-need profiles. Typically 3-6 segments emerge. Characterize each segment by its dominant unmet outcomes, then profile it with demographic and behavioral data to make it reachable. These segments are invisible to competitors using traditional segmentation methods — demographic, psychographic, or behavioral — because they are defined by what people need, not who they are.
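The clustering step above can be sketched with scikit-learn's k-means. The data below is synthetic (two planted respondent groups with opposite importance profiles); in a real project you would feed the actual importance and satisfaction ratings and compare k = 3..6 with a fit measure such as silhouette score.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Two synthetic respondent groups, 4 outcomes, opposite importance profiles
group_a = rng.normal([5, 5, 2, 2], 0.4, size=(60, 4))
group_b = rng.normal([2, 2, 5, 5], 0.4, size=(60, 4))
ratings = np.clip(np.vstack([group_a, group_b]), 1, 5)

# Standardize so no single outcome dominates the distance metric
X = StandardScaler().fit_transform(ratings)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Characterize each segment by its mean importance per outcome
for seg in np.unique(labels):
    profile = ratings[labels == seg].mean(axis=0).round(1)
    print(f"segment {seg}: mean importance per outcome = {profile}")
```

The per-segment mean profiles are the starting point for the narrative segment descriptions; demographic columns are then cross-tabulated against the labels to make each segment reachable.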

7. Formulate the innovation strategy

For each segment, decide: which existing products can be repositioned to serve this segment? Which products need improvement? What new products must be created? Use three strategic lenses: product-segment fit (which product targets which segment), product-strategy fit (differentiated for segments with many underserved outcomes, disruptive for overserved segments, dominant when both exist), and product-market fit (which specific outcomes to address to win the segment). The strategy should specify concrete outcome targets — not vague goals like “improve the user experience.”

8. Validate with ideation and concept testing

Use the underserved outcomes as constraints for ideation: every concept must demonstrably improve satisfaction on targeted outcomes. Score proposed concepts against the opportunity landscape. This replaces brainstorming-then-testing with a directed ideation process where the success criteria are defined before any ideas are generated. Test top concepts with a follow-up survey using the same outcome framework to verify that the new concept addresses the targeted outcomes better than existing solutions.
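One simple way to score concepts against the opportunity landscape, as described above, is to weight each concept by the opportunity scores of the outcomes it addresses. The outcomes, concepts, and numbers below are invented for illustration.

```python
# Hypothetical opportunity scores (0-20 scale) from the survey phase
opportunity = {
    "minimize time to adjust cutting angle": 14.2,
    "reduce likelihood of blade binding": 12.8,
    "minimize material waste from imprecise cuts": 11.5,
}

# Hypothetical concepts mapped to the outcomes they claim to improve
concepts = {
    "tool-free bevel lever": {"minimize time to adjust cutting angle"},
    "anti-bind blade + laser guide": {"reduce likelihood of blade binding",
                                      "minimize material waste from imprecise cuts"},
}

def concept_score(addressed: set[str]) -> float:
    """Sum of opportunity scores for the outcomes a concept targets."""
    return sum(opportunity[o] for o in addressed)

ranked = sorted(concepts, key=lambda c: concept_score(concepts[c]), reverse=True)
for c in ranked:
    print(f"{concept_score(concepts[c]):5.1f}  {c}")
```

A weighted sum like this is only a first-pass filter; the follow-up concept-testing survey verifies whether a concept actually moves satisfaction on its targeted outcomes.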

How AI changes this method

AI compatibility: partial — AI can accelerate several labor-intensive steps in the ODI process, particularly outcome statement extraction, survey analysis, and segmentation. However, the core intellectual work of defining the job, conducting qualitative interviews, and making strategic decisions requires experienced human judgment.

What AI can do

  • Extract outcome statements from interview transcripts: feed transcripts into an LLM and prompt it to identify statements that match the ODI format (direction + metric + object + clarifier). This reduces a task that takes 2-3 hours per interview to 15-20 minutes of review and refinement.
  • De-duplicate and consolidate outcomes: with 100-150 raw outcomes from 20+ interviews, many overlap. An LLM can cluster similar outcomes, flag duplicates, and propose merged wording — reducing consolidation time from days to hours.
  • Analyze survey data and calculate opportunity scores: Python or R scripts handle the Opportunity Algorithm, but an LLM can generate those scripts, interpret the results in plain language, and flag surprising patterns in the data.
  • Generate segment profiles from clustering output: once statistical clustering produces segments, an LLM can characterize each segment by identifying its top unmet outcomes, describing its needs in narrative form, and suggesting product positioning angles.
  • Draft the job map: given a job definition and background research, an LLM can propose an initial job map with 8-12 steps. The researcher reviews, adjusts, and validates with customers.
  • Survey design assistance: an LLM can format 150 outcome statements into survey-ready questions, apply consistent wording, and generate the survey structure for import into tools like Qualtrics or SurveyMonkey.
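For the extraction task in the first bullet, a prompt template along these lines is a reasonable starting point. The wording below is a sketch, not a validated Strategyn instrument; every extracted statement still needs human review against the ODI quality criteria.

```python
# Illustrative prompt template for LLM-assisted outcome extraction.
EXTRACTION_PROMPT = """\
You are assisting with Outcome-Driven Innovation (ODI) research.

The job-to-be-done under study: {job}

Below is an interview transcript with someone who performs this job.
Extract every desired outcome the participant implies, as statements in
the strict ODI format:

  [direction] + [performance metric] + [object of control] + [contextual clarifier]

Rules:
- direction is one of: minimize, increase, reduce the likelihood of
- no product, brand, or technology names (solution-independent)
- each statement must be measurable and unambiguous
- return one statement per line; do not invent outcomes that are not
  grounded in the transcript

Transcript:
{transcript}
"""

prompt = EXTRACTION_PROMPT.format(
    job="cutting wood in a straight line",
    transcript="...paste transcript here...",
)
print(prompt[:72])
```

The "do not invent outcomes" rule matters: without it, models readily pad the list with plausible-sounding statements the participant never implied.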

What requires a human researcher

  • Defining the job-to-be-done: the job definition determines everything downstream. Defining a job too broadly produces outcomes so generic they cannot guide innovation. Defining it too narrowly misses adjacent opportunities. This requires deep domain knowledge and strategic judgment.
  • Conducting qualitative interviews: the outcome discovery interview is a specialized skill. The researcher must push past feature requests and solutions to extract the underlying performance metric. This requires real-time rapport, probing, and pattern recognition that AI cannot replicate.
  • Validating outcome statement quality: each outcome must be measurable, solution-independent, stable over time, and unambiguous. AI-generated outcomes frequently violate one or more of these criteria — the researcher must catch and fix these errors.
  • Making strategic decisions: which segments to target, what strategy to pursue, how to position the product — these decisions require competitive knowledge, organizational constraints, and business judgment that no model possesses.

AI-enhanced workflow

Before AI, the most time-consuming part of ODI was the qualitative phase: conducting 20+ interviews, manually extracting outcome statements from each transcript, consolidating 300+ raw outcomes into a clean list of 100-150, and validating wording. This phase alone could take 4-8 weeks.

With AI assistance, a researcher can record interviews via Zoom with Otter.ai transcription, feed each transcript into an LLM to extract candidate outcome statements, then review and refine in a fraction of the time. The consolidation step — comparing outcomes across all interviews, merging duplicates, standardizing wording — drops from several days to a focused half-day session where the researcher reviews AI-proposed clusters.

The quantitative phase benefits too: once survey data arrives, an LLM can generate analysis scripts, run the Opportunity Algorithm, produce the opportunity landscape visualization, execute cluster analysis for segmentation, and draft segment profiles. The researcher shifts from doing the math to interpreting the results and making strategic calls. The total timeline for a well-prepared team can shrink from 16-24 weeks to 8-12 weeks without sacrificing rigor.

Beginner mistakes

Defining the job too broadly or too narrowly

A job defined as “being productive at work” is too broad — the outcomes will be so generic they cannot guide product decisions. A job defined as “using the cut function in Adobe Illustrator” is too narrow — you are describing a product step, not a customer job. The right level is functional and specific but solution-independent: “creating a presentation that persuades a client to approve a project.” Apply the product-disappearance test: if your product vanished, would this job still exist?

Writing outcome statements that contain solutions

“Minimize the time it takes to find the right filter in Photoshop” contains a solution (Photoshop). Correct form: “minimize the time it takes to isolate the desired visual effect in an image.” Every outcome must be solution-independent, or the analysis will be biased toward existing products. This is the most common error in ODI projects and the hardest to train out of interview teams.
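Since this error is so common, a crude automated lint can catch the obvious cases before consolidation. The blocklist below is an illustrative assumption; each project would maintain its own list of product and brand terms for its domain.

```python
# Hypothetical per-project blocklist of solution/brand terms
BRAND_TERMS = {"photoshop", "illustrator", "excel", "figma"}

def solution_terms(statement: str) -> set[str]:
    """Return any blocklisted product/brand terms found in an outcome statement."""
    words = {w.strip(".,").lower() for w in statement.split()}
    return words & BRAND_TERMS

bad = "Minimize the time it takes to find the right filter in Photoshop"
good = "Minimize the time it takes to isolate the desired visual effect in an image"
print(solution_terms(bad))   # {'photoshop'}
print(solution_terms(good))  # set()
```

A lint like this only flags literal matches; subtler solution-laden wording (e.g. naming a feature rather than a brand) still requires a trained reviewer.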

Fielding a survey that is too long without adequate incentives

With 100+ outcomes rated on two dimensions, the survey easily exceeds 200 items. Without careful design (grouping by job step, progress indicators, clean layout) and adequate incentives ($15-25 per completed response or equivalent prize pool), completion rates will be catastrophically low and the data unreliable. The Envato team used color coding, friendly wording, and extensive pre-testing to make their 94-outcome survey manageable.

Treating opportunity scores as absolute truths

An opportunity score is a relative ranking, not an absolute measure of market value. A score of 12 means “more underserved than a score of 8” — it does not mean “this is a $12M opportunity.” The scores guide prioritization, not business cases. Teams that report opportunity scores to executives without this context create false precision.

Skipping segmentation and treating the market as monolithic

The overall opportunity landscape aggregates all respondents, masking the fact that different groups of customers have different unmet needs. A product that addresses the top overall opportunities may perfectly serve one segment while completely missing another. Always run segmentation analysis — the hidden segments are often where the most valuable opportunities lie.

Example from practice

Bosch needed to enter the North American circular saw market, dominated by established incumbents with strong brand loyalty. Traditional market research would have focused on feature benchmarking and price positioning. Instead, Bosch applied ODI to understand how tradesmen execute the job of “cutting wood in a straight line.”

Through qualitative interviews with 24 professional tradesmen and a survey of 400+ respondents, the team identified 75 desired outcomes across 10 job steps. The Opportunity Algorithm revealed 14 significantly underserved outcomes — things like minimizing the time it takes to adjust the saw’s cutting angle, reducing the likelihood of the blade binding during a rip cut, and minimizing material waste from imprecise cuts. Competitors had focused on motor power and brand reputation while ignoring these execution-level needs.

Bosch designed a new circular saw that addressed all 14 underserved outcomes. The product launched and captured significant market share in a category where no new entrant had succeeded in years. The case illustrates ODI’s core value proposition: instead of guessing what features will win, the team knew with statistical confidence which outcomes were underserved and built a product that addressed them.

Tools

Qualitative research:

  • Zoom, Google Meet — remote interview recording
  • Otter.ai, Rev.com — interview transcription
  • Dovetail, EnjoyHQ — qualitative data tagging and outcome extraction

Survey design and fielding:

  • Qualtrics — the standard for complex surveys with 200+ items and branching logic
  • SurveyMonkey — simpler alternative, works for smaller outcome sets
  • Typeform — user-friendly survey experience, limited for large ODI surveys

Analysis and scoring:

  • R (with Xavier Russo’s ODI package on GitHub) — open-source ODI analysis scripts
  • Python (pandas, scikit-learn) — custom opportunity scoring and cluster analysis
  • SPSS — traditional statistical analysis for segmentation
  • Excel / Google Sheets — sufficient for opportunity scoring on smaller datasets

Visualization:

  • Tableau, Power BI — opportunity landscape charts and segment visualizations
  • Miro, FigJam — job mapping workshops and strategy alignment

Proprietary:

  • ODIpro (Strategyn) — Strategyn’s proprietary platform for the full ODI workflow, from outcome collection through strategy development