These prompts help you use AI at key stages of first-click testing: writing task scenarios that avoid on-screen labels, analyzing click-distribution data to identify misleading elements, and comparing design variants to decide which layout guides users most effectively.
Generate first-click test task scenarios
I am running a first-click test on this page design. Here is the context:
**Page type:** [Homepage / product page / dashboard / settings / checkout / etc.]
**Key elements on the page:** [List the main navigation items, buttons, sections, and their approximate positions]
**Target audience:** [Who the users are]
I need task scenarios for these user goals:
1. [User wants to do X — the correct target is Y element]
2. [User wants to do X — the correct target is Y element]
3. [User wants to do X — the correct target is Y element]
[Add 2-5 goals]
For each goal, write a task scenario that:
1. Describes the user's need in one sentence.
2. Does NOT use any text that appears as a label, button text, or heading on the page.
3. Is specific enough that there is one clearly correct first click.
Also flag any goals where avoiding on-screen text is especially difficult.
Analyze first-click test results
Here are the results of our first-click test. Analyze them and identify design problems.
**Page description:** [Brief description of the page and its purpose]
**Task results:**
| Task | Correct target | Success rate | Avg time to click (sec) | Top misclick targets |
|------|---------------|-------------|------------------------|---------------------|
| [Task 1] | [Element] | [X%] | [X] | [Element A: X%, Element B: X%] |
| [Task 2] | [Element] | [X%] | [X] | [Element A: X%, Element B: X%] |
[Continue for all tasks]
For each task with a success rate below 80%, provide:
1. What is likely drawing clicks away from the correct target (visual weight, position, label clarity).
2. A specific design recommendation to improve the success rate.
3. Whether the issue is likely a labeling problem, a visual hierarchy problem, or a positioning problem.
Also provide an overall assessment: is this page ready for development, or does it need another design iteration?
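If your testing tool exports raw click counts, you can pre-screen the table before handing it to the AI. A minimal Python sketch, using hypothetical task names and counts (not from any real study), that flags tasks falling below the common 80% first-click success benchmark:

```python
def flag_low_success(results, threshold=0.80):
    """Return the tasks whose first-click success rate is below threshold.

    results maps task name -> (correct first clicks, total participants).
    """
    return [task for task, (hits, n) in results.items() if hits / n < threshold]

# Illustrative data only: 34/40 = 85%, 22/40 = 55%, 38/40 = 95%.
results = {
    "Find pricing info": (34, 40),
    "Contact support": (22, 40),
    "Change account email": (38, 40),
}

for task in flag_low_success(results):
    hits, n = results[task]
    print(f"NEEDS REVIEW: {task} ({hits}/{n} = {hits / n:.0%})")
```

Running this on the sample data would flag only "Contact support"; the flagged tasks are the ones worth detailing misclick targets for in the prompt above.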
Compare two design variants from first-click data
We tested two design variants of the same page. Compare the results and recommend which to proceed with.
**Variant A:**
| Task | Success rate | Avg time (sec) | Main misclick target |
|------|-------------|---------------|---------------------|
| [Task 1] | [X%] | [X] | [Element: X%] |
[Continue]
**Variant B:**
| Task | Success rate | Avg time (sec) | Main misclick target |
|------|-------------|---------------|---------------------|
| [Task 1] | [X%] | [X] | [Element: X%] |
[Continue]
**Design differences between variants:**
[Describe what changed: button placement, label text, layout, visual hierarchy]
Provide:
1. Which variant performed better overall and per task.
2. What specific design choices in the winning variant likely caused the improvement.
3. Whether any tasks performed worse in the overall winning variant (and what to do about it).
4. A recommendation: ship Variant [A/B], iterate further, or test a third option.
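Before asking which variant won, it is worth checking whether a per-task difference is bigger than sampling noise, since first-click samples are often small. A sketch of a standard two-proportion z-test in plain Python, with illustrative counts (28/40 vs 36/40) that are assumptions, not real results:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided two-proportion z-test on first-click success counts.

    Returns (z, p_value); a negative z means Variant A's rate is lower.
    """
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical: Variant A 28/40 correct, Variant B 36/40 correct.
z, p = two_proportion_z(28, 40, 36, 40)
print(f"z = {z:.2f}, p = {p:.3f}")
```

A p-value well above 0.05 suggests the variants may be statistically indistinguishable on that task, which is useful context to include when asking the AI for a ship/iterate recommendation.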