
AI prompts for A/B testing: hypothesis generation, variant copy, and result analysis

Ready-to-use AI prompts for generating A/B test hypotheses, writing variant copy, analyzing results, and auditing your testing program.

How to use

Copy and paste into your AI assistant chat

These prompts help you use AI at every stage of A/B testing — from generating hypotheses grounded in research data, through writing variant copy, to analyzing results and producing stakeholder reports. Each prompt includes placeholders for your specific data and context.

Generate test hypotheses from research data

I am planning A/B tests for [product/page description]. Here is the data I have collected:

**Analytics data:**
[Paste key metrics: bounce rate, conversion rate, funnel drop-offs, time on page, exit pages]

**Heatmap/clickmap observations:**
[Describe what the heatmaps show: where users click, where they stop scrolling, what they ignore]

**Qualitative findings (usability tests, surveys, support tickets):**
[Paste 3-5 key quotes or observations]

**Current conversion rate for the page:** [X%]
**Average monthly traffic to the page:** [X visitors]

Based on this data, generate 8-10 testable hypotheses. For each hypothesis:
1. State it in the format: "Changing [element] to [new version] will [increase/decrease] [metric] because [reason]."
2. Estimate the expected impact (low / medium / high) based on industry benchmarks and the evidence strength.
3. Estimate the traffic needed to detect a meaningful effect.
4. Rank them by priority (impact vs. effort vs. traffic feasibility).

Focus on changes that address observed user behavior problems, not random cosmetic variations.
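If you want to sanity-check the AI's traffic estimate in point 3 yourself, the standard two-proportion sample-size formula is easy to run. The sketch below is a minimal Python version, assuming roughly 95% confidence and 80% power with a normal approximation; the baseline and target rates are placeholders, not benchmarks.

```python
from math import sqrt, ceil

def sample_size_per_variant(p_baseline, p_target, z_alpha=1.96, z_beta=0.84):
    """Visitors needed per variant to detect a lift from p_baseline to p_target
    with roughly 95% confidence and 80% power (two-proportion normal approximation)."""
    p_bar = (p_baseline + p_target) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p_baseline * (1 - p_baseline)
                                 + p_target * (1 - p_target))) ** 2
    return ceil(numerator / (p_target - p_baseline) ** 2)

# Placeholder numbers: 2.0% baseline conversion, hoping to detect a lift to 2.5%
n = sample_size_per_variant(0.020, 0.025)
print(f"Visitors needed per variant: {n:,}")                        # roughly 13,800
print(f"Months at 10,000 visitors/month, 50/50 split: {2 * n / 10_000:.1f}")
```

If the result implies a test running for many months, that is a signal to test a bolder change or a higher-traffic page, which is exactly the feasibility ranking the prompt asks the AI to perform.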

Generate variant copy for an A/B test

I am running an A/B test on [page type: landing page / product page / checkout / email].

**Current version of [element being tested]:**
[Paste the current headline / button text / product description / email subject line]

**Target audience:** [Describe the user: role, experience level, primary motivation]
**Desired action:** [What the user should do: sign up, click, purchase, read more]
**Tone:** [Professional / casual / urgent / reassuring / playful]
**Constraint:** [Character limit, brand guidelines, words to avoid]

**Why we are testing:** [Describe the problem — e.g., "Users scroll past the headline without engaging" or "Button click rate is 1.2% vs. industry average 3%"]

Generate 10 variant options. For each:
1. The new text.
2. A one-sentence rationale explaining what psychological principle or UX pattern it uses (e.g., specificity, social proof, loss aversion, clarity over cleverness).
3. A prediction of which user segment it might resonate with most.

Do not use clickbait, false urgency, or manipulative patterns. Prioritize clarity and relevance over cleverness.

Analyze A/B test results and write a stakeholder report

Here are the results of our A/B test. Analyze them and produce a stakeholder-ready report.

**Test name:** [Name]
**Hypothesis:** [State the original hypothesis]
**Primary metric:** [e.g., signup conversion rate]
**Guardrail metrics:** [e.g., purchase completion rate, bounce rate]
**Test duration:** [Start date – end date]

**Results:**
| Variant | Visitors | Conversions | Conv. Rate | Confidence |
|---------|----------|-------------|------------|------------|
| Control (A) | [N] | [N] | [X%] | — |
| Variant (B) | [N] | [N] | [X%] | [X%] |

**Segment data (if available):**
- Mobile: Control [X%], Variant [X%]
- Desktop: Control [X%], Variant [X%]
- New users: Control [X%], Variant [X%]
- Returning users: Control [X%], Variant [X%]

Produce a report with these sections:
1. **Executive summary** (3 sentences: what we tested, what happened, what we recommend).
2. **Detailed results** (statistical significance, effect size, confidence interval, practical significance assessment).
3. **Segment analysis** (do all segments agree? highlight any segment where the result reverses).
4. **Validity check** (was the sample size sufficient? was the test duration adequate? any external factors to note?).
5. **Recommendation** (ship, iterate, or discard — with reasoning).
6. **Next test suggestion** (based on what we learned, what should we test next?).
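Before pasting results into the prompt, it can help to compute the significance figures yourself instead of relying only on the testing tool's "confidence" label. This is a minimal sketch of a two-sided two-proportion z-test plus a 95% confidence interval for the absolute lift, using a normal approximation; the visitor and conversion counts are made up for illustration.

```python
from math import sqrt, erf

def ab_test_summary(visitors_a, conv_a, visitors_b, conv_b):
    """Two-sided z-test for the difference in conversion rates,
    plus a 95% confidence interval for the absolute lift (B minus A)."""
    p_a, p_b = conv_a / visitors_a, conv_b / visitors_b
    # Pooled standard error for the significance test
    p_pool = (conv_a + conv_b) / (visitors_a + visitors_b)
    se_pool = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se_pool
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approximation
    # Unpooled standard error for the confidence interval on the lift
    se_diff = sqrt(p_a * (1 - p_a) / visitors_a + p_b * (1 - p_b) / visitors_b)
    ci = (p_b - p_a - 1.96 * se_diff, p_b - p_a + 1.96 * se_diff)
    return p_a, p_b, z, p_value, ci

# Hypothetical numbers, not real results
p_a, p_b, z, p_value, ci = ab_test_summary(10_000, 200, 10_000, 240)
print(f"Control {p_a:.2%}, variant {p_b:.2%}, z = {z:.2f}, p = {p_value:.3f}")
print(f"95% CI for absolute lift: [{ci[0]:.2%}, {ci[1]:.2%}]")
```

Pasting the p-value and confidence interval into the results table gives the AI something concrete to reason about in the "detailed results" and "validity check" sections, rather than leaving it to infer significance from raw counts.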

Audit an A/B testing program for common mistakes

I want to audit our A/B testing practices for common mistakes. Here is how we currently run tests:

**How we choose what to test:** [Describe: data-driven hypotheses? gut feeling? stakeholder requests?]
**Our testing tool:** [Name]
**How we determine sample size:** [Describe: calculator? rule of thumb? "when it feels like enough"?]
**How long we typically run tests:** [Days/weeks]
**How we decide a winner:** [Describe: significance threshold? manual review? tool auto-calls?]
**What we do after a test:** [Describe: document? move on? iterate?]
**Our average monthly traffic:** [X visitors]
**Our average conversion rate:** [X%]
**Number of tests we run per month:** [X]

Review our process against these known pitfalls:
1. Stopping tests too early (before reaching statistical significance)
2. Not running tests for full business cycles
3. No pre-test hypothesis
4. Testing on pages with insufficient traffic
5. Peeking at results and making premature decisions
6. Not segmenting results by device and user type
7. Testing trivial elements before high-impact ones
8. Not accounting for external factors (seasons, campaigns)
9. Not documenting learnings from losing tests
10. Running overlapping tests on the same traffic

For each pitfall, assess whether our process is vulnerable, and if so, give a specific recommendation to fix it.
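As a companion to pitfalls 1 and 4, you can estimate the smallest lift your traffic can realistically detect in a one-month test window before asking the AI to audit anything. The sketch below assumes the usual 95% confidence and 80% power and a 50/50 split; the traffic and conversion figures are placeholders.

```python
from math import sqrt

def minimum_detectable_lift(monthly_visitors, baseline_rate,
                            z_alpha=1.96, z_beta=0.84):
    """Smallest absolute lift a one-month 50/50 test can reliably detect
    (~95% confidence, ~80% power, normal approximation)."""
    n_per_variant = monthly_visitors / 2
    return (z_alpha + z_beta) * sqrt(2 * baseline_rate * (1 - baseline_rate) / n_per_variant)

# Hypothetical program: 20,000 visitors/month, 2% baseline conversion
mde = minimum_detectable_lift(20_000, 0.02)
print(f"Minimum detectable absolute lift in one month: {mde:.2%}")  # about 0.55 percentage points
print(f"Relative lift required: {mde / 0.02:.0%}")                  # about 28%
```

If the required relative lift is larger than anything your past tests have produced, the audit will almost certainly flag pitfall 4: either run tests longer, pool higher-traffic pages, or test changes big enough to clear that bar.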