Instructions

Use this template to design and document each A/B test before you launch it.

It helps you clarify the hypothesis, variables, and metrics so you can draw meaningful conclusions and avoid running overlapping, muddled tests.

Test Overview

Test Name: e.g. Homepage CTA wording test
Test Owner:
Start Date:
End Date (or planned duration):
Status: Planning / Running / Paused / Complete

Hypothesis

“If we [change X], it will result in [desired Y], because [reason based on data or insight].”

Hypothesis Statement:

Test Details

Element Being Tested: e.g. CTA button copy, headline, layout
Variant A: current version (control)
Variant B: new version (treatment)
Traffic Allocation: 50/50, or another split (see the bucketing sketch below)
Target Audience: e.g. new visitors, email subscribers
Platform / Tool Used: e.g. Google Optimize, VWO, Webflow split test
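
If your testing tool does not handle assignment for you, a deterministic hash of the visitor ID keeps each visitor in the same variant across sessions. A minimal Python sketch, assuming a stable user_id and a hypothetical test name (both are placeholders, not part of this template):

```python
# Deterministic 50/50 bucketing sketch; user_id and test_name are placeholder values.
import hashlib

def assign_variant(user_id: str, test_name: str, treatment_share: float = 0.5) -> str:
    """Map (test_name, user_id) to a stable bucket in [0, 1) and pick a variant."""
    digest = hashlib.sha256(f"{test_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0x1_0000_0000
    return "B" if bucket < treatment_share else "A"

print(assign_variant("visitor-123", "homepage-cta-wording"))  # same visitor, same variant every time
```

Hashing on test name plus user ID means a visitor can land in different buckets for different tests, while staying in the same bucket for the life of any one test.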

Success Metrics

Primary Metric: e.g. conversion rate, CTR, leads captured
Secondary Metrics: e.g. bounce rate, time on page, scroll depth
Minimum Sample Size: use a sample size calculator to determine a statistically valid threshold (see the calculation sketch below)
Minimum Detectable Effect: e.g. aiming to detect at least a +10% relative change
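
If you want to sanity-check the calculator, the standard two-proportion approximation can be computed directly. A minimal Python sketch, assuming a placeholder 5% baseline conversion rate, a +10% relative minimum detectable effect, 95% confidence, and 80% power:

```python
# Per-variant sample size sketch for a two-sided, two-proportion z-test.
# The baseline rate and MDE below are illustrative assumptions, not recommendations.
from statistics import NormalDist

def sample_size_per_variant(baseline: float, mde_relative: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate visitors needed in each variant to detect the given relative lift."""
    p1 = baseline
    p2 = baseline * (1 + mde_relative)              # e.g. +10% relative lift
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = NormalDist().inv_cdf(power)            # power threshold
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2
    return int(n) + 1

# 5% baseline, detect a +10% relative lift: roughly 31,000 visitors per variant
print(sample_size_per_variant(0.05, 0.10))
```

Smaller baseline rates and smaller detectable effects both push the required sample size up sharply, which is why the MDE is worth agreeing on before launch.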

Risk & Guardrails

Risk: Drop in overall conversions
Mitigation: Pause the test if Variant B underperforms the control by 20% or more (see the guardrail check below)

Risk: Inconsistent tracking
Mitigation: QA all events and tagging before launch
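
The conversion guardrail reduces to a comparison of running conversion rates. A minimal sketch, using illustrative counts and the 20% threshold above; in practice you would also wait for a minimum sample before acting, so noise in the first few hours cannot trigger a pause:

```python
# Guardrail sketch: flag Variant B if its rate trails the control by 20%+.
# The counts and threshold are illustrative assumptions; adapt to your tooling.
def should_pause(conv_a: int, visitors_a: int, conv_b: int, visitors_b: int,
                 max_relative_drop: float = 0.20) -> bool:
    """Return True if Variant B's conversion rate breaches the guardrail."""
    rate_a = conv_a / visitors_a
    rate_b = conv_b / visitors_b
    return rate_b < rate_a * (1 - max_relative_drop)

print(should_pause(conv_a=250, visitors_a=5000, conv_b=180, visitors_b=5000))  # True: 3.6% vs 5.0%
```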

Results Summary (Post-Test)

Winning Variant: Variant A / Variant B / Inconclusive
Uplift Observed: e.g. +12.6% conversion lift
Statistical Significance: e.g. 95% confidence, p < 0.05 (see the significance check below)
Key Takeaways: What did we learn?
Next Action: Roll out, retest, or iterate?
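
To compute uplift and significance without a vendor dashboard, a two-proportion z-test is the usual approach. A minimal Python sketch, using made-up visitor and conversion counts chosen to roughly match the +12.6% example above:

```python
# Two-proportion z-test sketch; the counts below are illustrative, not real results.
from statistics import NormalDist

def summarize_results(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Return (relative uplift of B over A, two-sided p-value)."""
    rate_a, rate_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)                      # pooled conversion rate
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5     # standard error of the difference
    z = (rate_b - rate_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return (rate_b - rate_a) / rate_a, p_value

uplift, p = summarize_results(conv_a=500, n_a=10_000, conv_b=563, n_b=10_000)
print(f"uplift: {uplift:+.1%}, p-value: {p:.3f}")  # roughly +12.6%, p ≈ 0.047
```

Report the uplift as a relative change against the control, and only call a winner when the p-value clears the significance threshold you set before launch.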

Tip: Change only one variable per test so you can isolate the cause of any difference in performance.