Sample Size Calculator
Calculate the required sample size per variation to detect a meaningful difference in your A/B test with statistical confidence.
Enter Your Parameters
Your current conversion rate
Relative improvement you want to detect, i.e., the minimum detectable effect (MDE); e.g., 20% = 5% → 6%
Typically 90%, 95%, or 99%
Probability of detecting a true effect (typically 80%)
How It Works
This calculator uses the standard formula for comparing two proportions, accounting for the desired confidence level and statistical power. It determines how many visitors each variation needs to reliably detect your specified minimum effect.
The calculation is based on a normal approximation and assumes a two-tailed test, which is appropriate for most A/B testing scenarios where you don't know in advance whether the variant will perform better or worse.
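The formula described above can be sketched in a few lines of Python. This is a minimal illustration, not the calculator's actual source: the function name and parameter names are my own, and it uses the standard two-proportion normal-approximation formula with a two-tailed critical value.

```python
import math
from statistics import NormalDist

def sample_size_per_variation(baseline_rate, relative_mde,
                              confidence=0.95, power=0.80):
    """Visitors needed per variation to detect a relative lift of
    `relative_mde` over `baseline_rate` (two-proportion z-test,
    normal approximation, two-tailed)."""
    p1 = baseline_rate
    p2 = p1 * (1 + relative_mde)  # expected rate under the MDE
    z = NormalDist().inv_cdf
    z_alpha = z(1 - (1 - confidence) / 2)  # two-tailed critical value
    z_beta = z(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2
    return math.ceil(n)

# Example: 5% baseline, 20% relative MDE, 95% confidence, 80% power
# gives roughly 8,000–8,200 visitors per variation.
print(sample_size_per_variation(0.05, 0.20))
```

Note that `confidence` is 1 − α and `power` is 1 − β; raising either one increases the critical values and therefore the required sample size.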
Interpretation Guide
- Confidence Level: Higher confidence reduces false positives but requires more traffic
- Statistical Power: Higher power reduces false negatives (missing real effects)
- Smaller MDE: Detecting smaller improvements requires significantly more traffic
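The last point is worth quantifying: because the required sample size scales with the inverse square of the absolute difference between the two rates, halving the MDE roughly quadruples the traffic you need. A self-contained sketch (using the same normal-approximation formula as the calculator; names are illustrative):

```python
import math
from statistics import NormalDist

def n_per_variation(p1, relative_mde, confidence=0.95, power=0.80):
    # Two-proportion z-test sample size, normal approximation, two-tailed.
    p2 = p1 * (1 + relative_mde)
    z = NormalDist().inv_cdf
    z_alpha, z_beta = z(1 - (1 - confidence) / 2), z(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Halving the relative MDE roughly quadruples the required traffic:
for mde in (0.20, 0.10, 0.05):
    print(f"MDE {mde:.0%}: {n_per_variation(0.05, mde):,} per variation")
```

The ratio is not exactly 4 because the variance term also shifts slightly as the expected variant rate changes, but it is close.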