Use this template to design and document each A/B test before you launch it.
It helps you clarify the hypothesis, variables, and metrics so you can draw meaningful conclusions and avoid testing chaos.
| Test Name | e.g. Homepage CTA wording test |
|---|---|
| Test Owner | |
| Start Date | |
| End Date | (or test duration plan) |
| Status | Planning / Running / Paused / Complete |
Hypothesis Statement:
“If we [change X], it will result in [desired Y], because [reason based on data or insight].”
| Element Being Tested | e.g. CTA button copy, headline, layout |
|---|---|
| Variant A | Current version (control) |
| Variant B | New version (treatment) |
| Traffic Allocation | e.g. 50/50, or another split (see the assignment sketch below this table) |
| Target Audience | e.g. New visitors, email subscribers, etc. |
| Platform / Tool Used | e.g. Google Optimize, VWO, Webflow split test |
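If your testing tool handles traffic allocation for you, you can skip this; otherwise, a deterministic hash of a stable user identifier is a common way to implement the split so each visitor always sees the same variant. A minimal Python sketch, assuming a `user_id` string and a 50/50 split (both illustrative):

```python
import hashlib

def assign_variant(user_id: str, split: float = 0.5) -> str:
    """Deterministically bucket a user into Variant A or B from a stable ID.

    The same user_id always maps to the same variant, keeping the
    experience consistent across sessions. `split` is the share of
    traffic sent to the control (Variant A).
    """
    # Hash the ID and map it onto [0, 1] so the split is roughly uniform.
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF
    return "A" if bucket < split else "B"

print(assign_variant("user-12345"))  # same ID -> same variant on every call
```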
| Primary Metric | e.g. Conversion rate, CTR, leads captured |
|---|---|
| Secondary Metrics | e.g. Bounce rate, time on page, scroll depth |
| Minimum Sample Size | (use a sample size calculator, or the sketch below this table, to determine a statistically valid threshold) |
| Minimum Detectable Effect | (e.g. aim to detect at least a +10% relative change) |
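To sanity-check a calculator's output, the per-variant sample size for a two-proportion test can be estimated with the standard normal-approximation formula. The sketch below is plain Python; the 5% baseline rate, +10% relative MDE, 95% confidence, and 80% power are illustrative assumptions, not values from this template.

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variant(baseline: float, mde_rel: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-variant sample size for a two-proportion z-test.

    baseline : control conversion rate (e.g. 0.05 for 5%)
    mde_rel  : minimum detectable effect, relative (e.g. 0.10 for +10%)
    """
    p1 = baseline
    p2 = baseline * (1 + mde_rel)
    p_bar = (p1 + p2) / 2
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# Example: 5% baseline conversion, detect at least a +10% relative lift
print(sample_size_per_variant(0.05, 0.10))  # roughly 31,000 users per variant
```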
| Risks | Mitigation Plan |
|---|---|
| Drop in overall conversions | Pause the test if Variant B underperforms the control by 20% or more (see the guardrail sketch below) |
| Inconsistent tracking | QA all events and tagging before launch |
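If the 20% guardrail is checked by hand, a small scheduled script can make the pause rule explicit. The function name, counts, and threshold below are assumptions for illustration only.

```python
def should_pause(conv_a: int, n_a: int, conv_b: int, n_b: int,
                 max_drop: float = 0.20) -> bool:
    """Guardrail: flag the test for pausing if Variant B's conversion
    rate falls more than `max_drop` (relative) below Variant A's."""
    rate_a = conv_a / n_a
    rate_b = conv_b / n_b
    return rate_b < rate_a * (1 - max_drop)

# Hypothetical daily check: 500/10,000 conversions for A vs. 380/10,000 for B
print(should_pause(500, 10_000, 380, 10_000))  # True -> B is down ~24%, pause
```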
| Winning Variant | Variant A / B / Inconclusive |
|---|---|
| Uplift Observed | (e.g. +12.6% conversion lift) |
| Statistical Significance | (e.g. 95% confidence, p < 0.05; see the sketch below this table) |
| Key Takeaways | (What did we learn?) |
| Next Action | (Rollout, retest, iterate?) |
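To double-check the uplift and significance figures your tool reports, a two-proportion z-test can be reproduced in a few lines of dependency-free Python. This is a sketch, not your tool's exact method, and the conversion counts in the example are made up.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Return (relative uplift, two-sided p-value) for Variant B vs. A."""
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)        # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))    # two-sided p-value
    uplift = (p_b - p_a) / p_a
    return uplift, p_value

# Hypothetical counts: 480/10,000 conversions for A vs. 540/10,000 for B
uplift, p = two_proportion_z_test(480, 10_000, 540, 10_000)
print(f"uplift: {uplift:+.1%}, p-value: {p:.3f}")
```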
✅ Tip: Test only one variable per experiment so you can isolate the cause of any change in performance.