How to read heatmaps and click maps: a practical UX research guide

What are heatmaps and click maps?

Heatmaps are a quantitative visualization technique that aggregates user interactions on a webpage or app screen and displays them as a colored overlay on the interface. Warm colors (red, orange) mark high-activity zones — where many users clicked, scrolled to, or hovered — while cool colors (blue, green) mark areas users ignored or never reached. The family includes click maps (where users tapped or clicked), scroll maps (how far users scrolled before leaving), move or hover maps (cursor pathways on desktop), attention maps (a blend of scroll depth and time spent), and rage-click maps (frustration signals). Heatmaps compress thousands of sessions into a pattern the team can read at a glance, which makes them the fastest tool for spotting where the interface and users’ expectations diverge.

What question does it answer?

  • Where do users actually click on this page, and which interactive elements are they ignoring?
  • How far down the page do users scroll before they leave, and is the critical content above or below that line?
  • Are users trying to click elements that are not interactive (dead clicks), and if so, which ones?
  • Where do users show frustration signals (rage clicks, repeated clicks in one spot)?
  • Do desktop and mobile users behave differently on the same page, and where do those differences appear?
  • Which sections of a long page actually hold attention, and which get skimmed past?

When to use heatmaps

  • When a page is underperforming on a clear conversion or engagement goal and the team needs to localize the problem before redesigning anything.
  • When analytics report a high bounce rate or low scroll-through but the data alone does not explain which part of the page caused users to leave.
  • When stakeholders disagree about what users care about on a page — a heatmap of real behavior settles the argument with evidence.
  • When auditing a landing page, pricing page, sign-up flow, or in-product screen for friction points missed in lab usability testing.
  • When validating a redesign after launch by comparing the click and scroll maps of the old and new versions.
  • When investigating mobile-specific issues, since thumb-reach zones and viewport differences produce behavior that desktop testing misses.

Heatmaps are not the right method when traffic is too low to form stable patterns. Aim for at least 1,000 page views before reading patterns seriously, and 2,000–3,000 for segment splits. Heatmaps also do not explain why users behave the way they do — a hot click cluster can mean “users found it” or “users are frustrated and clicking repeatedly,” and only session replay or follow-up research can tell which.

What you get (deliverables)

  • Click or tap heatmap with click counts per element and click-share for major CTAs.
  • Scroll heatmap showing the percentage of visitors who reached each fold and the steepest drop-off points.
  • Move or attention heatmap showing where cursors lingered and which sections held attention before users acted.
  • Rage-click and dead-click report tied to a likely cause for each pattern.
  • Segmented comparison: same page split by device, traffic source, or user state.
  • Annotated findings document pairing each pattern with a hypothesis, an action, and a metric to watch.

Participants and duration

  • Participants: No recruited participants — heatmaps run on live traffic. Aim for 1,000+ page views per page; 2,000–3,000 for segmentation.
  • Setup time: 15 minutes to 1 hour to install the tracking code and configure pages.
  • Data collection window: 1–4 weeks depending on traffic volume.
  • Analysis time: 1–3 hours per page; 1–2 days for a 5–10 page audit.
  • Predictive (AI) variant: seconds, from a screenshot or Figma file, no live traffic required.

How to read a heatmap (step-by-step)

1. Pick the page and the question

Heatmaps reward focus. Start with a single page and a single question — “Are users clicking the primary CTA on the pricing page?” or “Where are visitors dropping out of the long-form landing page?” A vague goal like “let’s see what’s happening on the homepage” produces patterns no one knows how to act on.

2. Install the tool and set up tracking

Add the tracking code from a heatmap tool (Hotjar, Microsoft Clarity, Mouseflow, Contentsquare) to the pages you want to study. For single-page apps and pages with dynamic content, enable the tool’s interactive heatmap mode, where available, so the recording follows menu opens, tab switches, and state changes.

3. Wait for enough data

A click map built on 80 sessions reflects whatever a handful of visitors happened to do. Wait until the page has at least 1,000 visits, and 2,000–3,000 before reading segment splits. While waiting, write down predictions for what the heatmap will show — turn the analysis into a hypothesis check.
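Why 80 sessions is not enough can be shown with quick arithmetic. Here is a minimal sketch, assuming a simple normal approximation to the binomial, of how the margin of error around an observed click-share shrinks as page views grow:

```python
import math

def click_share_margin(page_views: int, click_share: float, z: float = 1.96) -> float:
    """95% margin of error (normal approximation) for an observed click-share."""
    return z * math.sqrt(click_share * (1 - click_share) / page_views)

# An element clicked by roughly 5% of visitors:
for n in (80, 1_000, 3_000):
    print(f"{n:>5} views: 5.0% ± {click_share_margin(n, 0.05):.1%}")
# 80 views: ± ~4.8 points, so the "hotspot" could be anywhere from ~0% to ~10%
# 1,000 views: ± ~1.4 points; 3,000 views: ± ~0.8 points
```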

4. Read the click heatmap first

Look for three things: where the largest red cluster sits (is it on the primary CTA, or somewhere unexpected?), which interactive elements show very little activity, and whether there are dead clicks on non-interactive elements like headings, images, or icons. Each signal points to a specific design fix.
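These three checks can also be run mechanically on a raw click export. A minimal sketch, assuming a hypothetical export format of one row per click with the element hit and an is-interactive flag (real field names vary by tool):

```python
from collections import Counter

# Hypothetical export: one row per click, with the element hit and
# whether that element is actually interactive.
clicks = [
    {"element": "a.cta-primary", "interactive": True},
    {"element": "img.hero", "interactive": False},
    {"element": "a.cta-primary", "interactive": True},
    {"element": "h2.pricing", "interactive": False},
    # ... thousands more rows from the tool's CSV export
]

total = len(clicks)
per_element = Counter(c["element"] for c in clicks)

# 1. Largest cluster: is it the primary CTA?
print("Top click targets:", per_element.most_common(3))

# 2. Click-share per element: flag interactive elements with very low share.
for element, n in per_element.items():
    print(f"{element}: {n / total:.1%} of clicks")

# 3. Dead clicks: clicks landing on non-interactive elements.
dead = Counter(c["element"] for c in clicks if not c["interactive"])
print("Dead-click candidates:", dead.most_common(3))
```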

5. Read the scroll heatmap next

Find the depth at which 50% of visitors are still on the page, then where 25% are. Mark the elements that sit above and below those lines. If a key benefit, social proof block, or secondary CTA sits in a zone where only 20% of users reach it, that section is effectively invisible. Watch for false bottoms — horizontal lines, large images, or wide whitespace that signal “the page ends here” when more content follows.
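A minimal sketch of those two reads, assuming the tool can export each session’s maximum scroll depth as a fraction of page height (an assumption; not every tool exposes this):

```python
import math

def reach_at_depth(max_depths: list[float], depth: float) -> float:
    """Share of sessions whose maximum scroll reached at least `depth` (0..1)."""
    return sum(d >= depth for d in max_depths) / len(max_depths)

def depth_at_reach(max_depths: list[float], reach: float) -> float:
    """Deepest page position still reached by at least `reach` of sessions."""
    depths = sorted(max_depths, reverse=True)
    needed = math.ceil(reach * len(depths))  # sessions required to hit the target reach
    return depths[max(needed - 1, 0)]

# Hypothetical per-session maximum scroll depths (fraction of page height):
sessions = [1.0, 0.9, 0.8, 0.75, 0.6, 0.55, 0.5, 0.4, 0.3, 0.2]

print(f"50% of visitors still present at: {depth_at_reach(sessions, 0.50):.0%} of page height")
print(f"25% of visitors still present at: {depth_at_reach(sessions, 0.25):.0%} of page height")
print(f"Reach at 60% depth: {reach_at_depth(sessions, 0.60):.0%} of visitors")
```

Mark the page elements that sit above and below the two depths this returns; anything below the 25% line needs a strong reason to stay there.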

6. Read the move, attention, and rage-click maps

Use these as supporting evidence, not as primary signals. Move maps suggest hesitation but cursor movement does not always track gaze. Rage-click and dead-click reports are higher priority — they often point to bugs, broken expectations, or missing functionality, and any cluster is a candidate for immediate investigation.
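The rage-click heuristic itself is simple enough to sketch. Assuming a per-session list of timestamped click coordinates (a hypothetical format), one common rule flags runs of three or more clicks within a small radius and a short time window:

```python
from dataclasses import dataclass

@dataclass
class Click:
    t: float  # seconds since session start
    x: float  # page coordinates in pixels
    y: float

def find_rage_clicks(clicks: list[Click], n: int = 3,
                     window_s: float = 1.0, radius_px: float = 30.0) -> list[Click]:
    """Flag clicks that start a run of >= n clicks within window_s seconds
    and radius_px pixels of each other -- a common frustration heuristic."""
    flagged = []
    clicks = sorted(clicks, key=lambda c: c.t)
    for i, start in enumerate(clicks):
        run = [c for c in clicks[i:]
               if c.t - start.t <= window_s
               and (c.x - start.x) ** 2 + (c.y - start.y) ** 2 <= radius_px ** 2]
        if len(run) >= n:
            flagged.append(start)
    return flagged

# Three fast clicks on one spot -> flagged; the later, distant click is not.
session = [Click(1.0, 200, 340), Click(1.3, 202, 338), Click(1.6, 199, 341),
           Click(9.0, 600, 900)]
print(len(find_rage_clicks(session)))  # 1 cluster start (the click at t=1.0)
```

The thresholds are judgment calls; tighten or loosen them to match what your own session replays show real frustration looks like.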

7. Segment before concluding

A combined desktop-and-mobile heatmap blends two completely different behaviors. Always split by device. If the page receives campaign traffic, also split by source. If patterns differ between segments, write up the segments separately and design fixes for each.
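A minimal sketch of the device split, again assuming a hypothetical click export that carries a device field: compute the primary CTA’s click-share per segment and compare before merging anything:

```python
from collections import Counter, defaultdict

clicks = [
    {"device": "desktop", "element": "a.cta-primary"},
    {"device": "desktop", "element": "a.nav-pricing"},
    {"device": "mobile", "element": "img.hero"},
    {"device": "mobile", "element": "a.cta-primary"},
    # ... full export
]

by_segment: dict[str, Counter] = defaultdict(Counter)
for c in clicks:
    by_segment[c["device"]][c["element"]] += 1

for segment, counts in by_segment.items():
    total = sum(counts.values())
    share = counts["a.cta-primary"] / total
    print(f"{segment}: CTA click-share {share:.1%} (n={total})")
# If the shares diverge by more than a few points, report the segments
# separately and design a fix for each.
```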

8. Validate hot zones with session replay

Heatmaps tell you where something happens; they do not tell you why. For every important hotspot or anomaly, watch 5–10 session recordings of users who interacted with that zone. If a click cluster on a heading turns out to be users trying to expand a non-existent dropdown, that is an interaction problem the heatmap alone could not explain.

9. Turn findings into a prioritized action list

For each pattern, write a one-line finding, a hypothesis, and a proposed action. Rank actions by expected impact and effort, then ship the highest-impact change first and re-measure after 1–2 weeks of new traffic.
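The ranking step can be as lightweight as one score per finding. A minimal sketch using a hypothetical impact-over-effort ratio, with findings borrowed from the example later in this guide:

```python
findings = [
    # (finding, proposed action, impact 1-5, effort 1-5)
    ("Dead clicks on testimonial card", "Make card expand on click", 4, 2),
    ("False bottom at 60% depth", "Remove divider, lift key section", 5, 3),
    ("Secondary CTA at <2% click-share", "Move CTA above the 50% scroll line", 3, 1),
]

# Higher impact and lower effort float to the top.
for finding, action, impact, effort in sorted(findings, key=lambda f: f[2] / f[3], reverse=True):
    print(f"{impact / effort:.1f}  {finding} -> {action}")
```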

How AI changes this method

AI compatibility: full — AI can both replace the live-traffic step (predictive heatmaps generated from a static mockup before launch) and accelerate the interpretation step (pattern detection, segment comparison, natural-language summaries). Humans still own the question framing, the segmentation choices, and the link from a heatmap pattern to a product decision.

What AI can do

  • Predictive (pre-launch) heatmaps: Tools like Attention Insight, EyeQuant, and Brainsight use neural networks trained on real eye-tracking data to generate attention heatmaps from a screenshot or Figma file in seconds. Vendors report 90–96% agreement with lab eye-tracking studies — enough to compare design variants before a single visitor sees the page.
  • Automatic anomaly detection on live heatmaps: Modern analytics platforms flag rage-click clusters, dead-click zones, and unusual drop-off points without the analyst scanning every page manually.
  • Natural-language summaries of patterns: An LLM can read an exported click map and produce a written finding like “67% of clicks land on the primary CTA, but the secondary CTA below the fold receives less than 2% — visibility appears to be the bottleneck.” A minimal sketch of this step follows the list.
  • Segment comparison at scale: AI can run the same analysis across dozens of segments and surface only the ones that diverge meaningfully from the average.
  • Linking patterns to outcomes: ML models can correlate hot zones and scroll thresholds with downstream conversion data, telling the team which interaction patterns predict conversion and which are noise.
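As a concrete example of the natural-language summary step, here is a minimal sketch using the OpenAI Python SDK; the export format, prompt, and model choice are illustrative, and any chat-capable LLM would do:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Aggregated click-map export (element -> click-share), e.g. from step 4.
click_shares = {"a.cta-primary": 0.67, "a.cta-secondary": 0.02,
                "h2.pricing (non-interactive)": 0.14}

prompt = (
    "You are a UX analyst. Given this click-share export for a landing page, "
    "write a two-sentence finding that names the likely bottleneck:\n"
    f"{click_shares}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # first-draft finding for the analyst to refine
```

Treat the output as a draft: the model can describe the pattern, but the analyst still owns the interpretation and the decision.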

What requires a human researcher

  • Choosing the question: A heatmap of the homepage with no specific question produces an interesting picture and zero decisions.
  • Interpreting why a hotspot exists: A red cluster on a heading can mean “users love this content” or “users are clicking because they expect it to expand and nothing happens.” Distinguishing the two requires session replay or qualitative research.
  • Deciding which fix to ship: When the heatmap reveals three problems, the choice depends on business priority, engineering effort, and strategic context.
  • Spotting tracking errors: A heatmap that looks unusual may reflect a broken tracking script or bot traffic, not a design problem. Recognizing that the data itself is wrong requires technical judgment.

AI-enhanced workflow

Before AI, a heatmap study took 2–4 weeks: install the tool, wait for traffic, manually scan each page, manually filter by segment, and write up findings. Predictive eye-tracking did not exist outside expensive lab studies, so designers shipped a layout and hoped attention matched their intent.

With AI in the loop, the workflow compresses dramatically. A designer can drop a Figma file into Attention Insight and get an attention heatmap in 30 seconds, comparing two layout variants before either is built. After launch, the live heatmap tool flags rage-click clusters automatically, and an LLM produces a first-draft findings summary the analyst then refines. The catch is that predictive heatmaps model visual salience, not intent — they can tell you a button is well-placed for visibility but not whether users will trust the offer enough to click. Live heatmaps and qualitative research remain necessary for the steps AI cannot reach.

Tools

Live heatmap platforms: Hotjar, Microsoft Clarity (free, includes session replay), Mouseflow, Crazy Egg, Contentsquare, FullStory.

Mobile-app specific: UXCam, Smartlook, Heap.

Predictive AI heatmaps: Attention Insight, EyeQuant, Brainsight, Neurons Predict — generate attention maps from a screenshot or Figma file before launch.

Session replay companions: Hotjar, FullStory, Microsoft Clarity, LogRocket — paired with a heatmap, replays answer the “why.”

Validation and downstream: Google Analytics 4, Mixpanel, VWO, Optimizely.

Works well with

  • Analytics / Clickstream (An): Analytics tells you the conversion rate dropped; the heatmap shows where on the page it dropped.
  • A/B Testing (Ab): Heatmaps generate hypotheses; A/B testing proves whether the proposed fix moves conversion.
  • Funnel Analysis (Fa): Funnel analysis pinpoints the step where users drop out; the heatmap of that step reveals the in-page reason.
  • Usability Testing Moderated (Ut): Usability testing explains why users struggle on a small sample; heatmaps confirm whether the same struggles appear at scale.
  • Survey (Sv): When the heatmap shows an unexpected pattern, an on-page survey targeted at users who exhibited that behavior can ask directly what they were trying to do.

Example from practice

A consumer fintech company saw its sign-up conversion rate on the main landing page stall at 2.1% after months of incremental improvements. The team had already tested headline variants, button colors, and form length without finding lift. They installed Hotjar and ran heatmap and session replay tracking on the landing page for two weeks.

The click heatmap showed something the team had missed: 14% of all clicks on the page landed on a customer testimonial card in the middle of the page, far more than the secondary “Learn more” link next to it. The card looked like a button — it had a subtle drop shadow, a slightly raised border, and a name and photo that resembled a profile link — but it was not interactive. Session replays confirmed users were trying to click into the testimonial expecting a fuller story. The scroll heatmap also showed that only 38% of visitors reached the “How it works” section that explained the value proposition, because the page had a horizontal divider at 60% depth that created a false bottom.

The team made two changes: they made the testimonial cards expand to a full case study on click, and they removed the divider that was causing the false-bottom effect, restructuring the “How it works” section to sit higher on the page. Within three weeks of new traffic, sign-up conversion rose from 2.1% to 2.7%, a roughly 29% relative improvement that outperformed every previous test.

AI prompts for this method

Four ready-to-use AI prompts with placeholders are available for this method: copy, paste, and fill in your context. See the full prompt collection for heatmap analysis.