How to run a JTBD Canvas Workshop: a practical guide with AI prompts
The JTBD Canvas Workshop is a collaborative research and alignment method developed by Jim Kalbach (author of The Jobs to Be Done Playbook) that uses a structured canvas to map the elements of a customer’s job-to-be-done. The method combines qualitative interviews with a team-based synthesis workshop, bridging the gap between raw customer insights and actionable product decisions. It synthesizes ideas from both the Ulwick (ODI) and Moesta (Switch Interview) schools into a practical format accessible to cross-functional teams.
Where the JTBD Canvas Workshop fits among JTBD approaches
Jobs to be Done is not a single method but a family of approaches built on a shared premise: people “hire” products to make progress. Three distinct schools have emerged, each with its own research method, philosophy, and output:
- JTBD Switch Interview — Bob Moesta / Clayton Christensen. Qualitative. Studies why people switch between solutions. Outputs: force diagrams, job stories, switching timelines. Best for: marketing, positioning, sales, onboarding, churn reduction.
- Outcome-Driven Innovation (ODI) — Tony Ulwick. Quantitative. Maps desired outcomes of a task and finds underserved ones. Outputs: job maps, opportunity scores, outcome-based segments. Best for: systematic product innovation, feature prioritization, market segmentation.
- JTBD Canvas Workshop — Jim Kalbach. Mixed methods. Practical synthesis of both schools using a canvas format. Outputs: JTBD Canvas, job stories, opportunity maps. Best for: agile teams, cross-functional alignment, bridging research and strategy.
Kalbach’s approach does not force a choice between the Ulwick and Moesta schools. The canvas draws on outcome statements (from ODI), emotional jobs (from the Switch school), and adds its own structural elements — circumstances, related jobs, and aspirations. The result is a method designed for teams that need to understand customers without committing to the full rigor (and cost) of ODI or the narrative depth of Switch Interviews.
What question does the JTBD Canvas Workshop answer?
- What is the core job our target customer is trying to get done, independent of any product or technology?
- What are the steps, success criteria, emotions, and circumstances that shape how they execute this job?
- Where in the job do customers struggle most — which steps generate the most friction and unmet needs?
- What job stories capture the intersection of situation, motivation, and expected outcome for our key pain points?
- How do related jobs and aspirations expand or constrain the scope of what we should build?
- What opportunities should our team prioritize, and how do they connect to business goals?
When to use
- When a cross-functional team (product, design, engineering, marketing) needs a shared understanding of the customer’s job — the canvas creates alignment without requiring everyone to read a 50-page research report.
- When you are starting a discovery phase and need a structured way to frame the problem space before generating solutions.
- When you want to apply JTBD but do not have the budget or timeline for a full ODI study (8-24 weeks) — the canvas workshop can produce actionable insights in 2-4 weeks.
- When you need to translate qualitative interview data into a format that product and engineering teams can work with directly, instead of handing over unstructured notes.
- When you want to combine elements from both JTBD schools — outcome statements from Ulwick’s tradition and emotional/circumstantial context from Moesta’s — in a single framework.
- When your team has existing customer data (support tickets, survey responses, analytics) that needs to be organized through a JTBD lens, not just categorized by feature or demographic.
Not the right method when you need statistically valid prioritization across 100+ outcomes (use ODI), when you need to reconstruct the emotional purchase decision story in depth (use the Switch Interview), when you are testing a specific UI or prototype (use usability testing), or when the team has no prior customer contact — the canvas works best when at least some qualitative data exists to populate it.
What you get (deliverables)
- JTBD Hypothesis Canvas: a single-page visual artifact mapping the job performer, main job, related jobs, aspirations, job steps, success criteria, emotions, and circumstances — the central output of the method
- Job map: 12-20 steps organized chronologically (plan, prepare, execute, monitor, modify, conclude), showing the sequence of how customers get the job done
- Success criteria: 50-100 discrete, measurable statements describing what “done well” looks like at each step, formulated in ODI-compatible format
- Job stories: structured statements in the format “When [situation + circumstance], I want to [progress/outcome], so I can [expected result]” — typically 5-10 priority stories capturing the team’s top findings
- Prioritized pain points: 2-3 steps and 5-10 success criteria where customers struggle most, identified through interview data and team consensus
- Opportunity map: a visual linking prioritized unmet needs to potential solution directions (“How Might We” statements, opportunity solution trees, or pain-matching matrices)
Participants and duration
- Customer interviews: 8-12 job performers who regularly execute the target job. Recruit broadly: past customers, competitor users, people in different contexts. Sessions last 30-60 minutes.
- Workshop participants: 4-8 team members from product, design, engineering, and any stakeholder group that will act on the findings.
- Total duration: 2-4 weeks. Frame phase (scoping + hypothesis): 2-3 days. Discover phase (interviews + analysis): 1-2 weeks. SPIN phase (synthesis + activation): 2-3 days.
How to run a JTBD Canvas Workshop (step-by-step)
1. Define the playing field and select the job performer
Gather your team and identify the domain you are operating in. Avoid industry labels (“healthcare,” “fintech”) — use plain language: “We’re in the business of helping people [verb + object].” From this domain, list all stakeholders and actors. Select one as your job performer — the person who actually executes the job, not a buyer, decision-maker, or influencer. The job performer is not a persona: it is a functional role defined by the job, not by demographics.
2. Identify the main job and map related jobs
Define the main job using the format: [verb] + [object] + [optional clarifier]. The job must be solution-independent, have a clear beginning and end, and use no adjectives (those belong in success criteria). Then map related jobs (other goals at the same level of granularity), aspirations (bigger “be” goals — who the performer wants to become), and initial job steps (sub-jobs within the main job). Place these on the JTBD Hypothesis Canvas.
3. Create a hypothesis job map
Before interviewing customers, build an initial job map by triangulating three sources: the universal job map framework (plan, prepare, execute, monitor, modify, conclude), input from colleagues and subject matter experts, and AI-generated lists of steps. The hypothesis map serves as your interview guide — not a script, but a framework for navigating the conversation.
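As a starting point for the AI-generated input, a prompt along the following lines can work; the bracketed fields are placeholders for your own domain, and the output is a hypothesis to challenge in interviews, not a finding.

```
You are helping a product team draft a jobs-to-be-done hypothesis job map.
Job performer: [functional role, e.g. "code reviewer"]
Main job: [verb + object + optional clarifier, solution-free, no adjectives]
List 12-20 chronological steps the performer goes through to get this job done,
grouped into the phases plan, prepare, execute, monitor, modify, and conclude.
For each step, add 2-3 candidate success criteria (measurable, with no reference
to any specific product or technology) and one circumstance that could change
how the step is performed.
```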
4. Conduct qualitative interviews with job performers
Interview 8-12 people who regularly perform the job. Use the hypothesis job map as a conversation guide, not a checklist. Start by asking how they last performed the job: what was their first step? Then walk through the chronology: “What did you do before that? What did you do after that?” At each step, probe for success criteria, emotions, circumstances, and struggles. Record and transcribe every interview.
5. Extract and categorize findings
Parse interview transcripts to extract four categories of JTBD elements: job steps (the chronological sequence), success criteria (measurable outcomes), emotions (feelings at each step), and circumstances (situational factors that change how the job is done). Expect to find 12-20 job steps, 50-100 success criteria, 10-20 emotions, and 10-15 circumstances across all interviews.
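A minimal sketch of this extraction pass, assuming the OpenAI Python SDK and an OPENAI_API_KEY in the environment (any chat-style LLM client would work the same way); the prompt wording, model name, and folder layout are illustrative, and every output still needs researcher review against the original transcript.

```python
import json
from pathlib import Path

from openai import OpenAI  # assumes the openai package is installed

client = OpenAI()

EXTRACTION_PROMPT = """You are analyzing a jobs-to-be-done research interview.
From the transcript below, return JSON with four keys: "job_steps",
"success_criteria", "emotions", "circumstances".
- job_steps: what the person did, chronologically, as verb + object phrases
- success_criteria: measurable statements of what "done well" means, no solutions
- emotions: feelings the person expressed, tied to the step where they occurred
- circumstances: situational factors that changed how the job was done

Transcript:
{transcript}
"""

def extract_elements(path: Path) -> dict:
    """Send one transcript to the model and parse the four categorized lists."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; use whatever model your team has access to
        messages=[{"role": "user",
                   "content": EXTRACTION_PROMPT.format(transcript=path.read_text())}],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

# One JSON file of elements per interview, reviewed by the researcher before step 6.
for transcript in sorted(Path("transcripts").glob("*.txt")):
    elements = extract_elements(transcript)
    Path(f"{transcript.stem}_elements.json").write_text(json.dumps(elements, indent=2))
```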
6. Prioritize unmet needs
Identify which elements represent the biggest pain points. Look for: steps where multiple participants reported struggle, success criteria mentioned by many but satisfied by none, emotions that signal frustration or anxiety, and circumstances that make the job significantly harder. Narrow down to 2-3 priority steps, 5-10 top success criteria, a few dominant emotions, and 3-5 key circumstances.
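A small sketch of how the counting behind this step can be automated, assuming the elements extracted in step 5 have been tagged with the participant and job step they came from (the field names here are illustrative); frequency is an input to the team discussion, not a substitute for it.

```python
# Illustrative records: one entry per struggle extracted from the step 5 analysis.
findings = [
    {"participant": "P1", "step": "monitor progress", "kind": "struggle"},
    {"participant": "P2", "step": "monitor progress", "kind": "struggle"},
    {"participant": "P2", "step": "plan the work", "kind": "struggle"},
    # ... one record per struggle mentioned in each transcript
]

# Count distinct participants per step, so one talkative interviewee cannot dominate.
struggling: dict[str, set[str]] = {}
for f in findings:
    if f["kind"] == "struggle":
        struggling.setdefault(f["step"], set()).add(f["participant"])

total = len({f["participant"] for f in findings})
for step, people in sorted(struggling.items(), key=lambda kv: -len(kv[1])):
    print(f"{step}: {len(people)} of {total} participants reported a struggle")
```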
7. Write job stories and synthesize findings
Combine prioritized elements into job stories using the format: “When [circumstance + situation], I want to [progress/outcome from success criteria], so I can [aspiration or expected result].” Write 5-10 job stories that capture the team’s top findings. These become portable, shareable reference points that any team member can understand without JTBD training.
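If you want an AI-drafted starting point, a prompt like the one below can turn prioritized elements into candidate job stories; the team still edits, discards, and rewrites, since machine drafts often slip adjectives, solutions, or vague circumstances into the format.

```
Write 5 candidate job stories for the job performer "[job performer]" working
on the main job "[main job]".
Use exactly this format: "When [circumstance + situation], I want to
[progress/outcome], so I can [expected result]."
Base each story on one of these prioritized findings:
- Circumstances: [paste the 3-5 key circumstances]
- Success criteria: [paste the 5-10 top success criteria]
- Emotions: [paste the dominant emotions]
Do not mention any product, feature, or technology in the stories.
```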
8. Activate insights with the team
Translate job stories into actionable formats: “How Might We” statements for ideation sessions, pain-matching matrices comparing current product capabilities against top unmet needs, opportunity solution trees connecting business goals to customer opportunities, or value proposition canvases with JTBD-informed pain points. The activation step is where many JTBD efforts fail — insights that sit in a report do not change decisions.
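Drafting these formats follows the same pattern: AI proposes, the team decides. An illustrative prompt for the “How Might We” route:

```
Here are our prioritized job stories:
[paste job stories]
For each story, write 2-3 "How Might We" statements that open up the problem
space without prescribing a solution. Keep each statement to one sentence and
anchor it in the circumstance and outcome from the story, not in our product.
```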
How AI changes this method
AI compatibility: partial — AI can accelerate hypothesis generation, transcript analysis, and element extraction, but it cannot replace the qualitative interviews, the facilitation of team workshops, or the strategic judgment needed to prioritize and activate insights.
What AI can do
- Generate a hypothesis job map: given a job definition and domain description, an LLM can produce a detailed initial job map with 12-20 steps, success criteria, and likely circumstances. This replaces days of desk research with a 15-minute prompt-and-review cycle.
- Extract JTBD elements from interview transcripts: feed transcripts into an LLM and prompt it to categorize findings into job steps, success criteria, emotions, and circumstances. This reduces analysis from 2-3 hours per transcript to 20-30 minutes of review.
- Draft job stories from prioritized elements: given the top pain points, an LLM can generate candidate job stories in the correct format for the team to review and refine.
- Prepare workshop materials: an LLM can generate “How Might We” statements, populate canvas templates, draft discussion guides, and create pre-reads for workshop participants.
- Identify patterns across interviews: an LLM can compare extracted elements across all interviews and flag which steps, criteria, and circumstances appear most frequently.
What requires a human researcher
- Facilitating team workshops: the JTBD Canvas Workshop is explicitly collaborative. The facilitator reads the room, manages disagreements, keeps conversations on track, and ensures all voices are heard.
- Conducting qualitative interviews: Kalbach emphasizes that AI is “incomplete and lacks real-world context.” In interviews, people reveal things that no published source contains. At least 6-8 interviews with real people remain essential.
- Making strategic prioritization decisions: which unmet needs to target, how to sequence opportunities, what to build first — these decisions require organizational context, competitive awareness, and stakeholder management.
- Validating formulations: success criteria and job stories have strict formatting rules. AI-generated formulations often include adjectives in jobs, solutions in success criteria, or overly broad circumstances.
AI-enhanced workflow
Before AI, the Frame phase required days of desk research and stakeholder interviews just to produce a hypothesis job map. With AI, a researcher can generate a first-draft canvas in under an hour — a hypothesis job map with 15+ steps, candidate success criteria for each step, likely emotions, and common circumstances. The team reviews and adjusts this draft in a 2-hour workshop instead of building from scratch.
The Discover phase benefits most from AI-assisted analysis. Instead of manually parsing 300 pages of transcripts to extract 50-100 success criteria, the researcher feeds each transcript into an LLM with a prompt specifying the four JTBD categories. The LLM produces categorized lists that the researcher validates and corrects in roughly a third of the time.
The SPIN phase changes less, because its value lies in collaboration, not data processing. Job stories and HMW statements can be AI-drafted as starting points, but the team discussion that refines them and commits to action remains fully human. The net effect: the Canvas Workshop timeline can compress from the typical 2-4 weeks to roughly 1.5-2 weeks for teams with AI support, with the time saved concentrated in analysis rather than facilitation.
Beginner mistakes
Confusing job performers with personas or job titles
A job performer is defined by the job they execute, not by who they are. “Software engineer” is a job title; “code reviewer” is a job performer. Conflating these leads to canvas entries that describe demographic attributes instead of functional needs, producing outputs that look like persona cards rather than job analyses.
Defining the main job with adjectives or solutions
“Efficiently manage project timelines using our software” contains two errors: an adjective (“efficiently” belongs in success criteria) and a solution (“our software” makes the job technology-dependent). Correct form: “coordinate team work across multiple projects to deliver on committed timelines.”
Skipping interviews and filling the canvas from assumptions
The canvas format is inviting — it looks like something you can fill out in a brainstorming session. Teams that populate the canvas without interviewing real job performers produce artifacts that reflect internal assumptions rather than customer reality. Kalbach is explicit: AI can supplement, but talking to at least 6-8 real people is essential.
Jumping from findings to solutions before writing job stories
The excitement of discovering customer pain points tempts teams to start designing solutions during the analysis phase. This skips the synthesis step where job stories create a shared, portable representation of findings. Without job stories, different team members walk away with different interpretations of the same data.
Treating the canvas as a one-time exercise
The JTBD Canvas is a living document. As the team builds and ships, new information emerges: customers respond differently than expected, circumstances change, new competitors shift the market. Schedule quarterly reviews of the canvas against recent customer feedback.
Example from practice
A B2B SaaS company building project management software noticed that enterprise customers were requesting features that mid-market customers never asked for, yet both segments had similar job titles and team sizes. Traditional persona-based analysis could not explain the divergence. The product team ran a JTBD Canvas Workshop to investigate.
Over two weeks, the team interviewed 10 project managers — 5 from enterprise accounts and 5 from mid-market — using the main job “coordinate team work across multiple projects to deliver on committed timelines.” The canvas revealed that while both groups shared the same main job, their circumstances differed dramatically: enterprise PMs operated under regulatory reporting requirements (a circumstance that changed the “monitor” and “conclude” steps entirely), while mid-market PMs worked in environments where informal communication replaced formal status updates.
The team wrote separate job stories for each circumstance cluster, then ran a pain-matching workshop against their current product. They discovered that their product strongly served the enterprise monitoring need but had no capability for the mid-market “exception detection” pain point. This led to a new feature (automated blocker detection with Slack notifications) that reduced mid-market churn by 18% within two quarters — a result the team would not have found through feature-request analysis alone.
Tools
Workshop facilitation:
- Miro — digital whiteboard with JTBD canvas templates, ideal for remote workshops
- FigJam (Figma) — a lighter-weight alternative to Miro; some teams (GitLab, for example) use it for JTBD canvases
- Physical whiteboards + sticky notes — still the best for in-person workshops
Interview and recording:
- Zoom, Google Meet — remote interview recording
- Otter.ai, Rev.com — transcription for post-interview analysis
- Grain, Dovetail — clip and tag interview moments for team sharing
Analysis and synthesis:
- Dovetail — tag transcripts by JTBD category
- Notion, Airtable — organize extracted elements in structured databases
- Excel / Google Sheets — tracking success criteria counts and priority scores
Templates:
- JTBD Toolkit (jtbdtoolkit.com) — Kalbach’s official canvas templates
- Miro JTBD template — community template with pre-built canvas structure