How to run a funnel analysis: find drop-offs and fix conversion
What is funnel analysis?
Funnel analysis is a quantitative research method that measures how users progress through a defined sequence of steps toward a conversion goal — and where they stop. By mapping a user flow as a series of ordered events, funnel analysis calculates the conversion rate at each step and reveals the exact points where users abandon the process. The method turns vague concerns about “low conversion” into precise, actionable data: not just “we’re losing users” but “we’re losing 43% of users between step 2 and step 3, and the problem is twice as bad on mobile.”
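The core calculation is simple: for each step, divide the number of users who reached it by the number at the previous step (step conversion) and by the number who entered the funnel (overall conversion). A minimal sketch, using illustrative step names and counts rather than real product data:

```python
# Illustrative funnel counts: users reaching each step, in order.
steps = [("view_pricing", 1000), ("begin_checkout", 590),
         ("enter_payment", 430), ("confirm", 380)]

rates = []
for (name, count), (_, prev) in zip(steps[1:], steps):
    rates.append({
        "step": name,
        "from_previous": count / prev,      # step conversion rate
        "from_entry": count / steps[0][1],  # overall conversion so far
    })
# rates[0] here is {"step": "begin_checkout", "from_previous": 0.59, "from_entry": 0.59}
```

The "43% lost between step 2 and step 3" framing above is just `1 - from_previous` for that transition.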
What question does it answer?
- What percentage of users who start a flow actually complete it?
- At which step do the most users abandon the process?
- How does conversion differ across user segments (mobile vs. desktop, new vs. returning)?
- Has a recent change improved or degraded conversion at a specific step?
- How long do users take between steps, and where do delays suggest friction?
- Which entry points produce the highest and lowest funnel completion rates?
When to use
- When the product has a critical multi-step flow and the team needs to know where users get stuck.
- When conversion rates are below target and the team needs to prioritize which step to fix first.
- When measuring the impact of a design change on a specific flow.
- When optimizing for revenue — even a small improvement at the highest-volume drop-off step can have significant impact.
- When generating hypotheses for qualitative investigation.
- When setting up ongoing monitoring with alerts for conversion regressions.
Not the right method when the user journey is non-linear — for content browsing or marketplace exploration, use path analysis instead. Funnel analysis also cannot explain why users drop off; pair it with usability testing, session recordings, or surveys for that.
What you get (deliverables)
- Step-by-step conversion report with drop-off rates.
- Drop-off ranking prioritized by absolute user loss.
- Segment comparison across device, user type, traffic source, geography.
- Time-between-steps analysis flagging friction points.
- Before/after comparison with significance testing.
- Hypothesis document with recommended qualitative follow-ups.
Participants and duration
- Participants: none recruited — the method uses live product data. Aim for at least 200-500 users entering the funnel to get stable rates.
- Data window: 1-2 weeks minimum.
- Setup time: 1-3 days with existing tracking; 1-2 weeks for new instrumentation.
- Analysis time: 1-3 days for a focused review.
How to run a funnel analysis (step-by-step)
1. Identify the flow and define 3-7 steps
Choose a flow with a clear start and end. Map each step to a trackable event with a success criterion. Too many steps create noise; too few miss important transitions.
2. Verify event tracking
Walk through the flow and confirm every event fires correctly. Check for double-counting, missing events on specific platforms, and incorrect event ordering.
3. Build the funnel in your analytics tool
Set the step sequence, conversion window (1-24 hours for checkout, 7-30 days for onboarding), and global filters. Test with known data.
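Under the hood, a funnel query counts the users who complete each step in the defined order within the conversion window. A minimal sketch of that counting logic, assuming events arrive as `(user_id, event_name, timestamp)` tuples; the step names and the 24-hour window are assumptions, not a specific tool's API:

```python
from collections import defaultdict

FUNNEL = ["start_signup", "verify_email", "complete_profile"]
WINDOW = 24 * 3600  # conversion window in seconds (24h), measured from funnel entry

def count_funnel(events):
    """Count users reaching each step in order, within the window."""
    by_user = defaultdict(list)
    for user, name, ts in sorted(events, key=lambda e: e[2]):
        by_user[user].append((name, ts))
    counts = [0] * len(FUNNEL)
    for seq in by_user.values():
        step, start_ts = 0, None
        for name, ts in seq:
            if step < len(FUNNEL) and name == FUNNEL[step]:
                if step == 0:
                    start_ts = ts
                elif ts - start_ts > WINDOW:
                    break  # completed the step, but outside the window
                counts[step] += 1
                step += 1
    return counts
```

Running a small hand-built event log through this and comparing against the tool's funnel chart is one way to "test with known data."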
4. Identify the biggest drop-offs
Look at both absolute drop-off (most users lost) and relative drop-off (lowest conversion percentage). These are your priority targets.
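The two rankings can point at different transitions, which is why both matter. A sketch with illustrative counts, chosen so the largest absolute loss and the worst relative rate land on different steps:

```python
# Users reaching each step, in funnel order (illustrative numbers).
counts = [("home", 10000), ("search", 6000), ("add_to_cart", 5400), ("checkout", 2000)]

dropoffs = []
for (name, n), (prev_name, prev_n) in zip(counts[1:], counts):
    dropoffs.append({
        "transition": f"{prev_name} -> {name}",
        "absolute_loss": prev_n - n,  # users lost at this transition
        "step_rate": n / prev_n,      # conversion from the previous step
    })

worst_absolute = max(dropoffs, key=lambda d: d["absolute_loss"])  # most users lost
worst_relative = min(dropoffs, key=lambda d: d["step_rate"])      # lowest conversion
```

Here the biggest absolute loss is at the top of the funnel, while the lowest step rate is at checkout; each is a priority target for a different reason.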
5. Segment the funnel
Split by device, user type, traffic source, geography. A 70% overall conversion might hide 45% on mobile — segment analysis reveals the real problem.
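How a blended rate hides a segment problem can be shown in a few lines; the per-segment counts below are illustrative:

```python
# Entered/converted counts per segment for a single funnel transition.
segment_counts = {
    "desktop": {"entered": 4000, "converted": 2800},
    "mobile":  {"entered": 6000, "converted": 2700},
}

blended = (sum(s["converted"] for s in segment_counts.values())
           / sum(s["entered"] for s in segment_counts.values()))
segment_rates = {seg: c["converted"] / c["entered"]
                 for seg, c in segment_counts.items()}
# blended is 0.55, but desktop converts at 0.70 and mobile at 0.45
```

The blended 55% is a weighted average of a healthy desktop funnel and a struggling mobile one; only the split reveals which to fix.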
6. Analyze time between steps
Steps with unusually long median times suggest confusion or friction. Steps with very short times may indicate users clicking through without reading.
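A minimal sketch of the median-time calculation, assuming per-user step timestamps in seconds (`None` where a user never reached the step); the numbers are made up:

```python
from statistics import median

# step_times[user] = timestamp of each funnel step, None if never reached.
step_times = {
    "u1": [0, 12, 300],
    "u2": [0, 8, 250],
    "u3": [0, 15, None],  # u3 dropped off before step 3
}

medians = []
for i in range(2):  # the two transitions between three steps
    deltas = [t[i + 1] - t[i] for t in step_times.values()
              if t[i] is not None and t[i + 1] is not None]
    medians.append(median(deltas))
# medians[0] is the step 1 -> 2 time, medians[1] the step 2 -> 3 time
```

Medians are preferred over means here because a few users who leave a tab open for hours would otherwise dominate the average.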
7. Compare across time periods
For before/after analysis, use significance testing (z-test for proportions) to confirm changes are real, not noise.
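A standard-library sketch of that two-proportion z-test; the before/after counts in the usage note are assumptions, not real data:

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for before/after conversion counts."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled proportion under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF is 0.5 * (1 + erf(x / sqrt(2))); two-sided p = 2 * P(Z > |z|).
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value
```

For example, a 59% to 71% shift with an assumed 1,000 users per period comes out highly significant, while the same percentages on 50 users per period may not.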
8. Generate hypotheses and recommend follow-ups
For each drop-off, list probable causes and recommend a qualitative method to investigate (session recordings, usability test, survey).
9. Set up monitoring
Create a permanent funnel dashboard with alerts triggered when conversion drops below defined thresholds.
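The alert logic itself is just a threshold comparison; a sketch, where the transition names, the 0.55 floor, and the alert hook are assumptions rather than any particular tool's configuration:

```python
# Alert floors per monitored funnel transition (assumed values).
THRESHOLDS = {"view_pricing -> begin_checkout": 0.55}

def check_alerts(observed_rates):
    """Return the transitions whose observed conversion fell below their floor."""
    return [t for t, floor in THRESHOLDS.items()
            if observed_rates.get(t, 1.0) < floor]
```

In practice this check would run on a schedule against fresh funnel data, with the returned transitions posted to the team's alert channel.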
How AI changes this method
AI compatibility: full — AI can automate funnel configuration, anomaly detection, segment analysis, and hypothesis generation.
What AI can do
- Auto-suggest funnel steps from event data.
- Detect drop-off anomalies and alert the team in real time.
- Run segment analysis at scale across dozens of dimensions automatically.
- Generate drop-off hypotheses based on common UX patterns and session metadata.
- Natural-language queries — non-technical team members ask questions and get funnel charts.
- Predictive scoring — ML models score each user’s likelihood of abandoning at the next step.
What requires a human researcher
- Defining which funnels matter based on business model and strategy.
- Validating tracking accuracy through hands-on testing.
- Interpreting why users drop off — requires qualitative investigation.
- Prioritizing fixes — weighing business impact, engineering effort, and strategic priorities.
AI-enhanced workflow
Before AI, funnel analysis required manual configuration, spreadsheet exports for segment comparisons, and periodic re-checks. With AI agents, the analyst describes the flow, the tool builds the funnel, highlights significant drop-offs, and generates a draft report. The funnel then monitors itself continuously.
Tools
Analytics with funnels: Amplitude, Mixpanel, GA4, PostHog, Heap, Pendo.
Conversion tools: Glassbox, Contentsquare, UXCam.
Session recording: Hotjar, FullStory, LogRocket, Smartlook.
A/B testing: Optimizely, VWO, Statsig.
AI-assisted: Amplitude AI, Mixpanel Spark, ChatGPT / Claude.
Works well with
- Analytics / Clickstream (An): Analytics identifies which flows to funnel-analyze; the funnel provides precise step-by-step measurement.
- Heatmaps / Click Maps (Hm): The funnel shows a 35% drop-off at payment; the heatmap shows where users click on that page.
- Usability Testing Moderated (Ut): The funnel identifies the drop-off; usability testing reveals why.
- A/B Testing (Ab): The funnel measures the problem; A/B testing measures whether the solution works.
- Survey (Sv): A post-abandonment survey asks users directly why they stopped.
Example from practice
An online education platform found that only 18% of users completed the enrollment flow. A six-step funnel revealed the largest drop-off (41%) between “view pricing” and “begin checkout.” Segment analysis showed this was 52% among paid-ad users but only 29% among organic — suggesting ad users had different pricing expectations. Time analysis showed converters spent 45 seconds on the pricing page while abandoners spent only 8 seconds.
The team added a plan comparison table, testimonials, and a money-back badge to the pricing page. Post-change conversion from pricing to checkout rose from 59% to 71% (p < 0.01), and the ad/organic gap narrowed from 23 to 9 percentage points.
Beginner mistakes
Defining too many or too few steps
Fifteen steps produce unprioritizable noise. Two steps show only the overall rate with no diagnostic value. Use 3-7 steps, each representing a meaningful action where abandonment is possible.
Not segmenting the funnel
A 25% overall conversion might be 40% desktop and 12% mobile. Without segmentation, you optimize for an average user who does not exist.
Confusing funnel order with user behavior
If some users go A → C → B instead of A → B → C, the funnel shows a false drop-off at B. Validate the assumed sequence with path analysis data first.
Treating funnel data as the diagnosis
A 40% drop-off is a symptom. Do not jump to solutions without first investigating the cause through session recordings, usability tests, or surveys.
Using too short a conversion window
A 1-hour window for a flow that users typically take 2-3 days to complete shows artificially low conversion. Set the window based on actual completion time distribution.
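One way to size the window from the completion-time distribution is to take a high percentile of observed times to complete, e.g. the 90th; a sketch with illustrative times in hours:

```python
def percentile(values, pct):
    """Nearest-rank percentile of a list of completion times."""
    ordered = sorted(values)
    k = max(0, min(len(ordered) - 1, round(pct / 100 * len(ordered)) - 1))
    return ordered[k]

# Hours from funnel entry to completion, for users who did complete (made up).
completion_hours = [2, 5, 8, 20, 30, 48, 52, 60, 70, 75]
window_hours = percentile(completion_hours, 90)  # covers 90% of completers
```

A window set this way counts slow-but-real converters instead of misclassifying them as drop-offs.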
AI prompts for this method
Three ready-to-use AI prompts with placeholders — copy-paste and fill in your context. See the full prompt list for funnel analysis.