How to create a mental model map: a practical guide with AI prompts
A health insurance company noticed that its mobile app had high download numbers but low engagement — people installed it, checked one thing, and never returned. The product team assumed the problem was missing features and proposed adding a claims tracker, a doctor search, and a virtual ID card.
Before building anything, the UX research team ran 18 listening sessions with policyholders, asking each person to describe how they think about and manage their health insurance. The resulting mental model diagram revealed seven mental spaces. Three of them — “understanding what my plan covers,” “deciding whether a medical cost is worth it,” and “preparing for a doctor visit” — were densely populated with behaviors but had zero product features mapped beneath them. The app’s existing features (claims status, plan summary, provider network) all clustered beneath two mental spaces that participants described briefly and without emotional weight.
The gap analysis showed that people spent most of their cognitive energy on questions the app never addressed: “Will this be covered?”, “How much will I actually pay?”, and “What should I tell the doctor about my plan?” The team redesigned the app’s roadmap to address these gaps, starting with a coverage estimator that answered the question “will this be covered?” in plain language. Six months after launch, the coverage estimator became the app’s most-used feature and weekly active users increased by 34%.
That outcome is what mental model mapping is designed to produce: a shift from “we think we know what users need” to “we can see the full structure of how they think and build for the spaces we’re missing.”
What mental model mapping actually is
Mental model mapping is a qualitative research method that produces a large-scale diagram representing how people think about and approach a particular area of life or work, independent of any specific tool or product. Developed by Indi Young, the method captures the reasoning, reactions, and guiding principles behind people’s behavior through in-depth listening sessions, then organizes those behaviors into a structured diagram that reveals where existing products and services support people’s thinking and where they leave gaps.
What questions it answers
Mental model mapping addresses questions about the cognitive structure behind behavior:
- How do people actually think about and approach this domain — what reasoning, emotional reactions, and guiding principles drive their behavior?
- Where does our product or service support people’s existing thought processes, and where does it fail to match the way they already think?
- What unmet needs exist that no current product or feature addresses, because nobody has mapped the full scope of how people think about this area?
- Which mental spaces (clusters of related thinking) represent the biggest opportunity for new features, services, or content?
- How do different audience segments differ in the way they approach the same domain — where do their mental models diverge?
- What assumptions has the team been making about how users think that are contradicted by the actual patterns in the data?
When to use
- When a team needs to understand a broad behavioral domain (e.g., “how people manage their finances,” “how people decide what to eat”) rather than a specific product interaction, and existing research methods feel too narrow.
- When the product strategy needs to be grounded in real human reasoning rather than assumptions about what users want — especially before building anything new.
- When the team has accumulated a large volume of qualitative data (interview transcripts, listening session notes, support conversations) and needs a structured method to synthesize it into an actionable map.
- When a content strategy, information architecture, or feature roadmap needs to reflect the way users actually organize their thinking, not the way the company’s internal departments are structured.
- When comparing what people do and think against what the product currently offers, to identify gaps that represent concrete opportunities.
- When a journey map or persona is insufficient because the team needs to understand the cognitive structure behind behavior, not just the behavior itself.
Not the right method when the team needs quick, tactical answers about a specific feature or interface. Mental model mapping requires substantial research investment — typically 15-20 listening sessions and days of coding and synthesis. If the question is “should this button be green or blue?” or “does this checkout flow work?”, a usability test will answer faster and more directly. The method also requires access to participants who can describe their reasoning in depth; it does not work well with populations who have difficulty articulating their thought processes in an interview setting.
What you get (deliverables)
- A mental model diagram: a large-format visualization with two halves. The top half shows towers of user behaviors (tasks, philosophies, reactions) grouped into mental spaces. The bottom half maps existing product features, content, and services aligned beneath the corresponding towers.
- Gap analysis: specific areas in the diagram where towers have no supporting features beneath them, indicating unmet needs and opportunities for new functionality.
- Alignment map: a view of where the product over-serves (features that map to mental spaces users barely think about) and under-serves (mental spaces with deep, rich towers but little or no product support).
- Mental space inventory: a labeled set of distinct cognitive clusters that describe how users organize their thinking about the domain.
- Behavioral patterns: recurring reasoning, reactions, and guiding principles that appear across multiple participants, with verbatim quotes as evidence.
- Strategic recommendations: a prioritized list of opportunities derived directly from the gaps and alignments in the diagram.
Participants and duration
Listening sessions: 15-20 participants who represent the target audience segments. Each listening session lasts 60-90 minutes and follows an open-ended format focused on the participant’s reasoning and approach to the domain, not on product feedback. Recruit people who have recent, relevant experience with the domain area.
Analysis team: 1-3 researchers. Mental model mapping is analytically intensive but does not require large workshops — one researcher can code and structure the data, with a second researcher validating the groupings.
Coding and grouping: 3-5 days. Each transcript is combed for individual behaviors (tasks, philosophies, feelings), which are written as short phrases and sorted into towers and mental spaces.
Diagram construction: 2-3 days. Arranging towers into mental spaces and mapping product features beneath them.
Total timeline: 4-6 weeks (recruitment: 1 week; listening sessions: 1-2 weeks; coding and synthesis: 1-2 weeks; diagram and report: 1 week).
How to conduct mental model mapping (step-by-step)
1. Define the behavioral domain and audience segments
Identify the domain you want to map. A domain is an area of life or work, not a product: “how people manage their health” rather than “how people use our health app.” The domain should be broad enough to reveal the full scope of thinking but bounded enough to be covered in 15-20 listening sessions. Identify 2-4 audience segments whose thinking about this domain might differ meaningfully.
2. Recruit participants based on behavior, not demographics
Recruit people who have recent, concrete experience with the domain. Screen for behavioral diversity, not demographic diversity: someone who manages their health proactively vs. someone who engages only when something goes wrong, for example. Aim for 15-20 participants distributed across your identified segments. Avoid recruiting people who work in UX, design, or adjacent fields — their answers tend to be meta-commentary rather than genuine descriptions of their own thinking.
3. Conduct listening sessions
Run 60-90 minute one-on-one sessions. These are not traditional interviews — the researcher asks one broad opening question about the domain and then lets the participant lead the conversation. The researcher’s role is to listen, prompt the participant to go deeper (“tell me more about that,” “what were you thinking at that point?”), and resist the urge to steer the conversation toward the product. Record and transcribe every session. The focus is on the person’s reasoning, emotional reactions, and guiding principles — not on their opinions about specific tools.
4. Comb transcripts for individual behaviors
Go through each transcript and extract every distinct behavior the participant describes. A “behavior” in Indi Young’s framework includes three types: tasks (things people do), philosophies (beliefs and guiding principles that shape decisions), and feelings (emotional reactions to situations). Write each behavior as a short, present-tense phrase from the participant’s perspective: “compares prices at three stores before buying,” “feels anxious when a payment is pending,” “believes fresh ingredients are worth the extra cost.”
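The extracted behaviors can be kept in a simple structured form so later grouping and counting stay mechanical. A minimal sketch in Python; the `Behavior` record and its field names are illustrative choices, not part of Young's framework:

```python
from dataclasses import dataclass

# The three behavior types in Indi Young's framework.
BEHAVIOR_TYPES = {"task", "philosophy", "feeling"}

@dataclass(frozen=True)
class Behavior:
    participant: str  # anonymized participant ID, e.g. "P07"
    kind: str         # one of BEHAVIOR_TYPES
    phrase: str       # short present-tense phrase, from the participant's view

    def __post_init__(self):
        if self.kind not in BEHAVIOR_TYPES:
            raise ValueError(f"unknown behavior type: {self.kind}")

# Example behaviors, one of each type (phrases from the guidance above).
behaviors = [
    Behavior("P03", "task", "compares prices at three stores before buying"),
    Behavior("P07", "feeling", "feels anxious when a payment is pending"),
    Behavior("P12", "philosophy", "believes fresh ingredients are worth the extra cost"),
]
```

Keeping the type explicit at extraction time prevents the common drift toward coding only tasks, since every record must declare whether it is a task, a philosophy, or a feeling.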
5. Group behaviors into towers
Print the behaviors on individual sticky notes (or use digital equivalents in Miro, FigJam, or a spreadsheet). Group related behaviors into vertical stacks called towers. Each tower represents a cluster of behaviors that belong together because they describe the same narrow activity or concern. A tower might be called “checking ingredient quality” and contain 8-12 behavior phrases from different participants. Keep towers narrow and specific — if a tower gets too broad, split it.
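Digitally, a tower is just a named list of behavior phrases, which makes the "split if too broad" rule easy to check. A sketch; the tower names are hypothetical and the 15-behavior threshold follows the guidance in this guide:

```python
# Towers: each maps one narrow activity to behavior phrases from multiple participants.
towers = {
    "checking ingredient quality": [
        "reads the label for additives before buying",
        "smells produce to judge freshness",
        "looks up unfamiliar ingredients on her phone",
    ],
    "comparing prices": [
        "compares prices at three stores before buying",
        "keeps a mental list of typical prices",
    ],
}

def towers_to_split(towers, max_behaviors=15):
    """Flag towers that may be too broad and should be reviewed for splitting."""
    return [name for name, phrases in towers.items() if len(phrases) > max_behaviors]
```

The flag is only a review cue: a tower over the threshold is split when its behaviors describe noticeably different activities, not merely because it is long.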
6. Organize towers into mental spaces
Group related towers into horizontal sections called mental spaces. A mental space represents a larger area of thinking within the domain. For a “managing health” domain, mental spaces might include “monitoring daily symptoms,” “deciding when to see a doctor,” “navigating insurance,” and “choosing treatments.” Arrange mental spaces in a sequence that reflects the natural flow of how people think about the domain — though this flow need not be strictly chronological.
7. Build the top half of the diagram
Lay out the mental spaces and their towers in a horizontal diagram. Each mental space is a labeled section. Within each section, towers stand as vertical stacks of behavior phrases. The height of a tower indicates the density of behavior around that topic — tall towers signal areas where participants have a lot to say, which often correlates with importance or complexity.
8. Map existing product features beneath the towers
Create the bottom half of the diagram. List every relevant feature, content piece, and service your product currently offers. Place each one beneath the tower it supports. A feature may support multiple towers. When a tower has no features beneath it, that gap is visible immediately. When features cluster beneath a few towers while most towers are unsupported, the diagram reveals a product that serves one narrow slice of users’ thinking while ignoring the rest.
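Held as data, the bottom half is a mapping from each feature to the towers it supports; inverting it makes gaps visible as towers with empty feature lists. A sketch with hypothetical feature and tower names loosely echoing the health insurance case:

```python
# Each feature lists the towers it supports; a feature may support several.
feature_supports = {
    "claims status": ["tracking a submitted claim"],
    "plan summary": ["understanding what my plan covers", "tracking a submitted claim"],
    "provider network": ["finding an in-network doctor"],
}

def features_by_tower(all_towers, feature_supports):
    """Invert the mapping: tower -> features beneath it (empty list = visible gap)."""
    by_tower = {tower: [] for tower in all_towers}
    for feature, supported in feature_supports.items():
        for tower in supported:
            by_tower.setdefault(tower, []).append(feature)
    return by_tower
```

Starting from the full tower list (not from the feature list) is what makes the gaps appear: any tower no feature ever mentions still gets an entry, with nothing beneath it.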
9. Analyze gaps, overlaps, and opportunities
Walk through the completed diagram looking for patterns. Gaps (towers with no features below) are unmet needs — potential new functionality. Overlaps (multiple features beneath one tower) may indicate redundancy or over-investment. Tall towers with gaps are high-priority opportunities: many participants described rich, detailed thinking in an area the product does not address at all. Document each finding with the supporting evidence from the towers above.
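Once towers carry their behavior counts and their supporting features, the three patterns in this step can be computed directly. A sketch; the thresholds and the function name are illustrative choices, not part of the method:

```python
def analyze(tower_heights, tower_features, tall=8, overlap=3):
    """Classify towers into gaps, possible over-investment, and priority gaps.

    tower_heights:  tower name -> number of behavior phrases in the tower.
    tower_features: tower name -> list of features mapped beneath it.
    """
    gaps = [t for t, feats in tower_features.items() if not feats]
    overlaps = [t for t, feats in tower_features.items() if len(feats) >= overlap]
    # Tall towers with no features: rich thinking the product ignores entirely.
    priority = sorted(
        (t for t in gaps if tower_heights.get(t, 0) >= tall),
        key=lambda t: -tower_heights[t],
    )
    return {"gaps": gaps, "overlaps": overlaps, "priority": priority}
```

The output is a starting point for the researcher's walk-through, not a verdict: whether an overlap is redundancy, or a gap is a real opportunity, still depends on the evidence in the towers above.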
10. Communicate findings and drive strategy
Translate the diagram into strategic recommendations. For each priority gap, describe what users think and do in that space, why the current product misses it, and what a solution might look like. Present the diagram to stakeholders as a map that shows where the product is aligned with how users think and where it is misaligned. Use the diagram as a reference artifact for roadmap planning, content strategy, and information architecture decisions. A mental model diagram remains valid for years if the domain is stable, because it maps human reasoning rather than reactions to a specific interface.
How AI changes this method
AI compatibility: partial — AI can substantially accelerate the most labor-intensive steps of mental model mapping: transcript coding, behavior extraction, and preliminary grouping. However, the listening sessions themselves require genuine human presence and rapport, and the final structure of the diagram depends on interpretive judgment that AI cannot perform reliably. A fully AI-generated mental model diagram would contain plausible-looking groupings that miss the subtle distinctions a trained researcher catches — the difference between a philosophy and a task, or between a genuine feeling and a socially desirable response.
What AI can do
- Transcript coding: An LLM can process listening session transcripts and extract candidate behaviors — tasks, philosophies, and feelings — producing a first-pass inventory that a researcher then reviews and corrects. This cuts coding time from days to hours.
- Preliminary grouping: Given a list of extracted behaviors, AI can suggest initial tower groupings based on semantic similarity, giving the researcher a starting structure to refine rather than a blank canvas.
- Cross-participant pattern detection: AI can identify which behaviors appear across multiple participants and flag recurring themes, helping the researcher see patterns that might take multiple passes through the data to notice manually.
- Gap analysis automation: Once the diagram structure is established, an LLM can compare a product feature inventory against the tower structure and highlight towers with no corresponding features, accelerating the alignment step.
- Quote retrieval: When writing up findings, AI can search transcripts for the most illustrative verbatim quotes for each tower or mental space, saving the researcher from re-reading hundreds of pages.
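For the first item, the coding pass can start from a simple template the researcher fills per transcript chunk and sends to whatever LLM they use. A hypothetical sketch: the wording and the `type | phrase` output format are assumptions to adapt, not a tested recipe:

```python
# Hypothetical first-pass coding prompt for listening-session transcripts.
CODING_PROMPT = """\
You are assisting a UX researcher coding a listening-session transcript.
Extract every distinct behavior the participant describes. Output one line
per behavior in the form:  type | phrase
- type is exactly one of: task, philosophy, feeling
- phrase is a short present-tense statement from the participant's
  perspective, e.g. "compares prices at three stores before buying"
Do not invent behaviors that are not grounded in the transcript.

Transcript:
{transcript}
"""

def build_coding_prompt(transcript_chunk: str) -> str:
    """Fill the template for one transcript chunk."""
    return CODING_PROMPT.format(transcript=transcript_chunk)
```

The constrained line format keeps the model's output easy to parse back into behavior records, and the closing instruction is a guard against the model padding the inventory with behaviors the participant never described. Every line still gets human review.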
What requires a human researcher
- Conducting listening sessions: The entire method depends on participants feeling heard and safe enough to describe their real reasoning. This requires a researcher who can build rapport in real time, follow unexpected threads, sit with silence, and resist leading the conversation. An AI cannot conduct these sessions.
- Distinguishing behavior types: The difference between a task (“checks prices at three stores”), a philosophy (“believes the cheapest option is always a bad deal”), and a feeling (“feels guilty about spending too much”) requires interpretive judgment about intent. AI classifies these with surface-level accuracy but misses the ambiguous cases where the distinction matters most.
- Final diagram structure: Deciding where one mental space ends and another begins, which towers belong together, and how to sequence the spaces requires deep familiarity with the raw data and interpretive skill. Automated clustering optimizes for word similarity, while human grouping optimizes for conceptual coherence — and the two do not always align.
- Strategic interpretation: A gap in the diagram is a structural observation. Whether that gap represents a real opportunity or an area users deliberately handle outside any product requires contextual understanding that only a researcher immersed in the data can provide.
AI-enhanced workflow
Before AI, mental model mapping was one of the most time-consuming qualitative methods in a researcher’s toolkit. Coding 15-20 transcripts into individual behaviors typically consumed 3-5 full working days — reading each transcript multiple times, marking behaviors, typing them out, and cross-checking for consistency. With an LLM processing transcripts in a first pass, a researcher can review and correct an AI-generated behavior inventory in 1-2 days rather than building it from scratch. The AI catches the obvious behaviors (clear task descriptions, explicit statements of preference) while the researcher focuses attention on the subtle ones (implied philosophies, emotions expressed through tone rather than words).
The grouping phase also benefits. Instead of starting with hundreds of sticky notes on a blank wall, the researcher begins with AI-suggested clusters and spends time rearranging, splitting, and merging them — a refinement task rather than a construction task. This shifts the researcher’s effort from mechanical work (reading, extracting, sorting) to interpretive work (judging, structuring, deciding), which is where human expertise adds the most value.
The final diagram and its strategic interpretation remain fully human. No AI can look at a mental model diagram and tell a product team “this gap matters more than that one because of what we heard in sessions 4, 7, and 12 about the emotional weight of this decision.” That synthesis requires having been present in the research, or at minimum having read the transcripts with the attentiveness of someone who will be responsible for the recommendations.
Works well with
- In-depth Interview (Di): Listening sessions in mental model mapping are a specific form of in-depth interview. The skills and recruitment approaches transfer directly, and existing interview transcripts can serve as input data for the mental model diagram.
- Persona Building (Ps): Mental model diagrams reveal the cognitive patterns that differentiate audience segments. These patterns produce richer, more behaviorally grounded personas than demographics-based approaches.
- Journey Mapping (Jm): A mental model diagram shows the full range of thinking about a domain; a journey map shows how a specific person moves through a specific experience. Using both together connects the broad cognitive context (mental model) with the sequential experience (journey).
- Card Sorting (Cs): Mental model mapping reveals how users organize their thinking about a domain; card sorting reveals how they expect information to be organized in an interface. The mental spaces from the diagram can inform the categories tested in a card sort.
- JTBD Switch Interview (Js): Mental model mapping captures the reasoning and philosophies behind behavior; JTBD interviews capture the causal forces behind a specific switching decision. Together they provide both the broad cognitive picture and the focused decision dynamics.
Beginner mistakes
Treating listening sessions as product interviews
The most common mistake is asking participants about the product instead of about the domain. Mental model mapping requires the researcher to set the product aside entirely and focus on how people think and act in the broader area of life the product serves. Questions like “what do you think of our app?” or “what features would you want?” produce product opinions, not mental models. The researcher must ask about the participant’s life and reasoning, and only map the product against those findings afterward.
Making towers too broad
Beginners often create large, vaguely named towers (“managing money,” “dealing with health”) that are essentially mental spaces, not towers. A tower should be specific enough that every behavior in it describes the same narrow activity. “Checking whether a specific medication is covered by insurance” is a tower. “Dealing with insurance” is a mental space that contains many towers. If a tower has more than 15 behaviors and they describe noticeably different activities, split it.
Skipping the bottom half of the diagram
Some teams build the top half of the diagram (the user’s mental model) but never map their product features against it. Without the bottom half, the diagram is an interesting but unactionable piece of research. The strategic value of mental model mapping comes from the alignment — seeing where the product matches people’s thinking and where it does not. Skipping this step is like building a map and never plotting your current location on it.
Coding only tasks and ignoring philosophies and feelings
Indi Young’s framework distinguishes three types of behavior: tasks, philosophies, and feelings. Beginners tend to extract only tasks because they are the easiest to identify — concrete actions described with verbs. But philosophies (beliefs that guide decisions) and feelings (emotional reactions) are often the most strategically valuable behaviors in the diagram. A philosophy like “believes cheaper always means worse quality” can explain an entire cluster of purchasing decisions that task-level analysis would miss.
Rushing the coding phase
The transcript coding phase is tedious and time-consuming, and beginners often skim transcripts rather than reading them carefully word by word. Skimming catches the obvious, explicitly stated behaviors but misses the implied ones — the philosophy mentioned in passing, the feeling expressed through a change in tone. A mental model diagram built from skimmed data will have the right overall shape but lack the depth that makes it strategically useful.