The Practical Guide To AB Testing Statistics

What is "Significance" In AB Testing Statistics?
Reducing Type 1 and 2 Errors in AB Test Statistics
How Does Sample Size Affect Statistical Significance?
Calculating The Statistical Significance of an AB Test
Statistical Controversy: Frequentist VS Bayesian
The Hybrid Approach to AB testing Statistics
The Practical Guide To AB Testing Statistics: Conclusions

It's essential to the success of your AB tests that your data is understood properly. That means analysing the statistics to make sure your results are not due to chance. If you skip this step, there is a chance you could miss an important opportunity. Or, if you are not careful, you could make decisions based on a false positive.

When we perform an AB test (which is a form of "hypothesis testing") we create two competing versions of a webpage and show them to two groups of randomly selected people. The new version (page B) will have different buttons, web forms, notifications or any other variation we can think of. The different conversion rates we get for page A and page B show us which of our versions has performed better during the test. We then need to decide if the results are reliable and whether they tell us anything meaningful.

What is "Significance" in AB Testing Statistics?

"Significance" is the most important concept in AB testing statistics. Results have statistical significance when they are very unlikely to have occurred due to random variations. In other words, you are not likely to have produced the two different conversion rates for page A and page B unless something concrete has changed.

Why Does Statistical Significance Matter?

Even really positive results can be misleading. If your results are interpreted incorrectly, you run the risk of applying unproven changes to your website. You even run the risk of decreasing conversions.
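To make the idea concrete, here is a minimal sketch of how a significance check might be run on AB test results, assuming a standard two-proportion z-test; the visitor and conversion numbers are hypothetical and used only for illustration, not figures from this guide.

```python
import math

def ab_test_p_value(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-sided p-value for the difference between two conversion rates,
    using a standard two-proportion z-test."""
    rate_a = conversions_a / visitors_a
    rate_b = conversions_b / visitors_b
    # Pooled conversion rate, assuming the Null Hypothesis (no real difference).
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    standard_error = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z_score = (rate_b - rate_a) / standard_error
    # Convert the z-score into a two-sided p-value via the normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z_score) / math.sqrt(2))))
    return rate_a, rate_b, p_value

# Hypothetical traffic: 5,000 visitors per page, 200 vs 250 conversions.
rate_a, rate_b, p = ab_test_p_value(200, 5000, 250, 5000)
print(f"Page A: {rate_a:.1%}, Page B: {rate_b:.1%}, p-value: {p:.3f}")
# A p-value below 0.05 corresponds to significance at the 95% level.
```

The smaller the p-value, the less plausible it is that random variation alone produced the gap between the two conversion rates.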
Statistical significance is a way of making sure that your results are reliable before jumping to any conclusions. Marketers and CRO experts wait for a pre-determined level of confidence before declaring a winning variation. If your AB test results are statistically significant at a level of 95%, they could still be due to random variation once in every 20 times. However, 95% of the time, your results will not be due to chance.

Rejecting the Null Hypothesis in an AB Test

In an AB test, we use experimental data to evaluate two versions of our webpage. Unless the conversion rates of page A and page B are exactly the same, our test will produce a winner – but how do we know the winner is really better? There are two possible explanations for the results - H1, the "Alternative Hypothesis" - and H0, the "Null Hypothesis."

The Null Hypothesis states that there is no real effect behind the data your test has produced. You accept the Null Hypothesis when you do not have strong enough evidence to say that the new version of your webpage is definitely better than the original. H1 is an alternative interpretation to H0. The Alternative Hypothesis is that you do have strong enough evidence to show that the new version of your webpage is better. However, in AB testing statistics, we do that by proving that H0 is NOT true. That's why hypothesis testing is all about trying to reject the Null Hypothesis.
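To illustrate the "once in every 20 times" point above, here is a rough simulation (not taken from the guide) of repeated A/A tests in which the Null Hypothesis is true by construction: both pages convert at the same hypothetical 5% rate, yet a 95%-significant "winner" still appears in roughly 5% of runs. The sample size, conversion rate and z-test helper are illustrative assumptions.

```python
import math
import random

def p_value(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test p-value (same approach as the earlier sketch)."""
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (conv_b / n_b - conv_a / n_a) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(1)
TRUE_RATE = 0.05   # both pages genuinely convert at 5%, so H0 is true
VISITORS = 5000    # hypothetical visitors per variation
RUNS = 1000

false_positives = 0
for _ in range(RUNS):
    conv_a = sum(random.random() < TRUE_RATE for _ in range(VISITORS))
    conv_b = sum(random.random() < TRUE_RATE for _ in range(VISITORS))
    if p_value(conv_a, VISITORS, conv_b, VISITORS) < 0.05:
        false_positives += 1   # a "winner" declared although nothing changed

print(f"False positives: {false_positives} out of {RUNS} tests "
      f"(about {false_positives / RUNS:.0%}, i.e. roughly 1 in 20)")
```

Waiting for a pre-determined confidence level keeps this false-positive rate at a known, acceptable level rather than eliminating it, and declaring a winner when nothing has really changed is precisely the kind of mistake discussed in the next section.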
The Two Types Of AB Testing Statistics Errors

Because we are testing a specific idea (that our results show version B is better than version A), only two outcomes are possible when we are interpreting an AB test: