How to run a True Intent Study: a practical guide with AI prompts
A True Intent Study is an intercept-based survey method that captures live visitors on a website or app, asks what they came to do, lets them complete their task, and then asks whether they succeeded and what difficulties they encountered. The method produces a direct link between visitor intent and task outcome — measured on real traffic, in real context, with no lab setup or separate recruitment. MeasuringU classifies it as a Mixed Use method because it combines behavioral data (what visitors actually do) with attitudinal data (what they report about their experience).
What question does it answer?
- Who is visiting our website or app — and does our visitor profile match the customer profile we designed for?
- What are the top tasks visitors come to accomplish, and what proportion of visitors falls into each task category?
- For each major task, what percentage of visitors successfully complete it?
- Where in the site do visitors encounter difficulties, and what types of problems do they report?
- How do visitors rate the overall experience (usability, credibility, appearance, likelihood to return)?
- How do task completion and satisfaction change over time as we ship design improvements?
When to use
- During the optimization phase of a live website or app, when you need to identify which tasks are failing and where visitors get stuck.
- When you need to benchmark the current experience before a redesign, so you have baseline metrics to measure improvement against.
- When you want to understand who your visitors actually are versus who you assume they are.
- When analytics shows a drop-off at a specific step but does not explain why.
- When you need to prioritize the backlog by impact: combining top-task frequency with task failure rate reveals which problems affect the most people.
- When you want to recruit participants for follow-up studies from a pool of qualified, real visitors.
Not the right method when you need to understand why people buy or switch products (use a JTBD interview), when you need to observe detailed task-level behavior with think-aloud (use usability testing), when your site has very low traffic, or when you are testing a prototype that is not yet live.
What you get (deliverables)
- Top-task inventory: a ranked list of visitor intents with traffic percentages
- Task completion rates: success rate per task, measured by self-report
- Failure diagnosis: open-ended descriptions of where and why visitors failed, grouped by theme
- Visitor profile: demographic breakdown of actual site visitors
- Experience benchmark: standardized scores (SUPR-Q, NPS, SUS)
- Key-driver analysis: which features have the biggest impact on satisfaction
- Participant pool: contacts for follow-up usability tests or interviews
Participants and timing
True Intent Studies do not require traditional recruitment. Participants are live visitors intercepted on the site. Target 300-600 completed responses. For sites with multiple major tasks, aim for 100 responses per task journey, totaling 400+.
Run the study for at least one full week to capture weekday and weekend patterns. The intercept typically invites 5-15% of visitors; actual acceptance rates range from 3% to 10%. The survey takes 3-5 minutes per participant. Total timeline: 1-2 days setup, 1-4 weeks collection, 2-5 days analysis.
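As a quick sanity check before launch, you can estimate collection time from traffic and the rates above. A minimal sketch in TypeScript; the visitor counts and rates are placeholder assumptions, not benchmarks:

```ts
// Rough estimate of days needed to reach a target number of completes.
// All inputs are assumptions you replace with your own traffic numbers.
function daysToTarget(
  dailyVisitors: number,   // unique visitors per day on intercepted pages
  inviteRate: number,      // fraction of visitors shown the invitation (e.g. 0.10)
  acceptRate: number,      // fraction of invitees who complete (e.g. 0.05)
  targetCompletes: number  // e.g. 400 for four task journeys at ~100 each
): number {
  const completesPerDay = dailyVisitors * inviteRate * acceptRate;
  return Math.ceil(targetCompletes / completesPerDay);
}

// Example: 20,000 daily visitors, 10% invited, 5% acceptance -> 100 completes/day.
// Even if the math says 4 days, still run a full week to cover weekday/weekend patterns.
console.log(daysToTarget(20_000, 0.10, 0.05, 400)); // 4
```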
How to run a True Intent Study
1. Define what you need to learn
Start with the four core questions: Who is visiting? Why are they coming? Did they succeed? What do they think? Decide which is your primary objective.
2. Design the intercept placement
Place the intercept on high-traffic entry pages for a general study, or on specific flow pages for a targeted study. Place it at the beginning of a task, not the end — you need to capture both successful and unsuccessful visitors.
3. Configure the intercept
Insert a JavaScript snippet on target pages. Configure the invitation rate based on traffic and desired sample size, set cookie-based deduplication, add a 2-5 second delay, and ensure mobile compatibility. The “new window” method performs better than embedded or overlay approaches.
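For illustration, here is a minimal browser-side sketch of the sampling, deduplication, and delay logic described above. The survey URL, cookie name, and the bare confirm() dialog are placeholder assumptions; intercept platforms ship polished versions of this for you:

```ts
// Minimal browser-side intercept sketch. SURVEY_URL and the cookie name
// are hypothetical placeholders; intercept platforms handle this logic.
const SURVEY_URL = "https://example.com/true-intent-survey"; // placeholder
const COOKIE = "ti_invited";
const INVITE_RATE = 0.10;   // fraction of visitors invited
const DELAY_MS = 3000;      // 2-5 second delay before inviting

function alreadyInvited(): boolean {
  return document.cookie.split("; ").some(c => c.startsWith(`${COOKIE}=`));
}

function markInvited(): void {
  // Cookie-based deduplication: don't re-invite this visitor for 30 days,
  // whether or not they accept.
  document.cookie = `${COOKIE}=1; max-age=${30 * 24 * 3600}; path=/`;
}

function maybeInvite(): void {
  if (alreadyInvited() || Math.random() >= INVITE_RATE) return;
  markInvited();
  setTimeout(() => {
    // confirm() is a crude stand-in for a styled invitation banner.
    if (confirm("Share your feedback? It takes about 3 minutes.")) {
      // "New window" method: the survey opens separately so the visitor
      // can finish their task first, then answer the outcome questions.
      window.open(SURVEY_URL, "_blank");
    }
  }, DELAY_MS);
}

maybeInvite();
```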
4. Write the survey in three phases
Phase 1 (before task): “Why did you visit our site today?” with predefined task options plus an open-ended text field.
Phase 2 (after task): “Were you able to complete what you came to do?” (yes/no/partially) with open-ended follow-up for failures.
Phase 3 (experience metrics): 2-4 rating scales (ease of use, satisfaction, NPS) plus 2-3 demographic questions. Total: no more than 10 questions.
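One way to make the three-phase structure concrete is to lay the questionnaire out as data. The wording and task options below are illustrative examples, not a required template:

```ts
// Illustrative three-phase survey definition; wording and options are examples.
type Question = {
  phase: 1 | 2 | 3;
  id: string;
  text: string;
  type: "single-choice" | "open-text" | "scale";
  options?: string[];
};

const survey: Question[] = [
  // Phase 1: intent, asked before the visitor starts their task.
  { phase: 1, id: "intent", text: "Why did you visit our site today?",
    type: "single-choice",
    options: ["Check order status", "Find product info", "Get support", "Other"] },
  { phase: 1, id: "intent_other", text: "Please describe:", type: "open-text" },
  // Phase 2: outcome, asked after the visitor finishes.
  { phase: 2, id: "success", text: "Were you able to complete what you came to do?",
    type: "single-choice", options: ["Yes", "Partially", "No"] },
  { phase: 2, id: "failure_why", text: "What got in the way?", type: "open-text" },
  // Phase 3: experience metrics and light demographics.
  { phase: 3, id: "ease", text: "How easy was it to use the site?", type: "scale" },
  { phase: 3, id: "nps", text: "How likely are you to recommend us?", type: "scale" },
];
```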
5. Write intercept copy that maximizes acceptance
Explain that visitors complete their task first, then answer a short survey. Use “Share your feedback” rather than “Take a survey.” Mention the time (“about 3 minutes”).
6. Pilot for 24-48 hours
Run at low invitation rate (2-3%). Check survey loading, response rates, open-ended quality, and whether predefined tasks cover actual visitor reasons.
7. Run the full study
Increase invitation rate and run for at least one week. Monitor daily. Adjust if “Other” exceeds 20% of intent responses.
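A minimal sketch of that daily check, assuming intent answers arrive as plain strings:

```ts
// Daily check on the intent question: if "Other" grows past 20%,
// your predefined task list is missing real visitor intents.
function otherShare(intents: string[]): number {
  const other = intents.filter(i => i === "Other").length;
  return intents.length ? other / intents.length : 0;
}

const today = ["Check order status", "Other", "Get support", "Other", "Other"];
if (otherShare(today) > 0.2) {
  console.warn("Revisit the task categories: 'Other' exceeds 20%.");
}
```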
8. Analyze: connect intent to outcome
Tabulate top-task distribution. Calculate per-task completion rates. Code failure descriptions into themes. Cross-tabulate by segment. Run key-driver analysis. Compare against previous benchmark.
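A sketch of the core cross-tab, assuming each response carries a task label and a self-reported outcome; the response shape and sample data are hypothetical:

```ts
// Intent distribution and per-task completion from self-reported outcomes.
type Response = { task: string; success: "Yes" | "Partially" | "No" };

function tabulate(responses: Response[]): void {
  const byTask = new Map<string, { n: number; yes: number }>();
  for (const r of responses) {
    const row = byTask.get(r.task) ?? { n: 0, yes: 0 };
    row.n += 1;
    if (r.success === "Yes") row.yes += 1;
    byTask.set(r.task, row);
  }
  for (const [task, { n, yes }] of byTask) {
    const share = ((n / responses.length) * 100).toFixed(1);
    const completion = ((yes / n) * 100).toFixed(1);
    console.log(`${task}: ${share}% of traffic, ${completion}% completion`);
  }
}

tabulate([
  { task: "Check status", success: "No" },
  { task: "Check status", success: "Yes" },
  { task: "File a return", success: "Yes" },
]);
// Check status: 66.7% of traffic, 50.0% completion
// File a return: 33.3% of traffic, 100.0% completion
```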
9. Report and prioritize
Build a priority matrix: task frequency multiplied by failure rate equals impact score. For each priority task, include failure themes, representative quotes, and specific page locations.
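A worked sketch of the impact score, using illustrative numbers in the spirit of the tax-portal example later in this guide:

```ts
// Impact score: task frequency x failure rate, per the matrix above.
// Traffic shares and completion rates are illustrative, not real data.
type TaskStats = { task: string; trafficShare: number; completion: number };

const tasks: TaskStats[] = [
  { task: "Check status",  trafficShare: 0.35, completion: 0.28 },
  { task: "File a return", trafficShare: 0.25, completion: 0.74 },
  { task: "Find a form",   trafficShare: 0.15, completion: 0.60 },
];

const ranked = tasks
  .map(t => ({ ...t, impact: t.trafficShare * (1 - t.completion) }))
  .sort((a, b) => b.impact - a.impact);

// "Check status" leads: 0.35 x 0.72 = 0.252 of all visitors fail this task.
console.table(ranked);
```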
10. Set up continuous measurement
Run continuously or quarterly. Use SUPR-Q or SUS scores as longitudinal benchmarks. Keep the JavaScript intercept in place and rotate survey focus.
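A tiny sketch of the longitudinal comparison, with illustrative SUS-style scores:

```ts
// Compare each measurement wave's benchmark to the prior wave.
// Scores are illustrative, not real study data.
const waves = [
  { period: "Q1", sus: 68 },
  { period: "Q2", sus: 71 },
  { period: "Q3", sus: 66 },
];

waves.slice(1).forEach((w, i) => {
  const delta = w.sus - waves[i].sus; // waves[i] is the previous wave
  console.log(`${w.period}: SUS ${w.sus} (${delta >= 0 ? "+" : ""}${delta} vs ${waves[i].period})`);
});
```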
How AI changes this method
AI compatibility: partial — AI automates open-ended coding and accelerates analysis, but study design and strategic prioritization require human judgment.
What AI can do
- Open-ended response coding: LLMs theme-code hundreds of responses in minutes (a batching sketch follows this list)
- Survey draft generation from study objectives and site context
- Key-driver analysis automation within survey platforms
- Longitudinal trend detection across measurement periods
- Response quality filtering (gibberish, spam detection)
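To make the first item concrete, here is a hedged sketch of how the coding pass can be batched: one prompt per chunk of responses, sized so each batch is easy to spot-check. No specific LLM provider or API is assumed, and the seed themes and sample responses are illustrative:

```ts
// Sketch of an LLM theme-coding pass. How you send each prompt
// (API call, chat UI) is up to you; nothing provider-specific is assumed.
const THEMES = ["Navigation", "Content missing", "Technical error",
                "Form confusion", "Other"]; // seed themes from a human first read

const failureResponses: string[] = [
  "Couldn't find where to check my status",
  "The link was hidden under File a Return",
];

function codingPrompt(batch: string[]): string {
  return [
    "Assign each visitor response to exactly one theme from this list:",
    THEMES.join(", "),
    "Answer as 'index. theme' lines. Responses:",
    ...batch.map((r, i) => `${i + 1}. ${r}`),
  ].join("\n");
}

function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) out.push(items.slice(i, i + size));
  return out;
}

// 50 responses per prompt keeps batches reviewable; read quotes yourself too.
const prompts = chunk(failureResponses, 50).map(codingPrompt);
```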
What requires a human
- Intercept placement decisions based on information architecture
- Task taxonomy design at the right granularity level
- Strategic prioritization of which problems to fix first
- Cross-referencing survey data with behavioral analytics
AI-enhanced workflow
The biggest bottleneck — open-ended analysis — drops from 2-4 days to half a day with LLM-based coding. Survey design benefits from AI drafts, though question wording and task list granularity still need human review. The intercept configuration remains manual. AI-powered dashboards can surface real-time themes during data collection.
Tools
Intercept and survey: MUIQ (MeasuringU), Qualtrics Site Intercept, Hotjar, Helio, UserZoom/UserTesting. Analytics: Google Analytics, FullStory, Hotjar session recordings. Analysis: Displayr, Thematic, Dovetail, Excel/Google Sheets. AI-assisted: Claude/ChatGPT, Qualtrics XM Discover.
Works well with
- Analytics / Clickstream (An): Intent data from the study explains what analytics cannot — why visitors behave the way they do.
- Usability Testing (Ut/Ur): True Intent identifies failing tasks; usability testing diagnoses exactly where and why.
- Survey (Sv): For deeper attitudinal research, a broader survey can follow up with the visitor pool recruited through the intercept.
- Funnel Analysis (Fa): Funnels show where drop-off happens; True Intent explains why.
- Heatmaps / Click Maps (Hm): Overlaying intent segments on heatmaps reveals whether different visitor types interact with pages differently.
Example from practice
A government agency managing an online tax filing portal saw that 40% of visitors abandoned the site within three minutes. Analytics showed the drop-off but could not explain it. The team assumed the tax forms were too complex, and the roadmap included a form simplification project estimated at six months.
The UX team ran a two-week True Intent Study, collecting 1,200 responses. The top-task analysis revealed that 35% of visitors came to check the status of a previously filed return — a task the team had not considered a priority. Of those visitors, only 28% reported successfully finding their filing status. The open-ended responses showed a consistent pattern: visitors could not find the “Check Status” link because it was buried under the “File a Return” section.
The team moved the “Check Status” link to a prominent homepage position and added a status-check shortcut to navigation. The change took two weeks. A follow-up study showed the completion rate rose from 28% to 81%, and three-minute abandonment dropped from 40% to 22%. The six-month project was deprioritized.
Beginner mistakes
Making the survey too long
Every additional question reduces completion rate. Keep it to 10 questions maximum, 5 minutes total.
Placing the intercept at the end of the task
Intercept on entry pages to capture both successful and unsuccessful visitors. End-of-flow placement only captures completers.
Using task categories that are too vague
“Browsing” tells you nothing. Build categories from actual visitor language using a pilot with open-ended responses.
Ignoring the open-ended data
Quantitative metrics tell you what; open-ended responses tell you why. Use AI for the first coding pass, but read representative quotes yourself.
Running the study once and never again
The real value comes from longitudinal tracking. Keep the intercept in place and measure regularly.
AI prompts for this method
4 ready-to-use AI prompts with placeholders — copy-paste and fill in with your context. See all prompts for True Intent Studies →.