On the fallacy of A/B testing

A/B testing is a commonly used method for determining the effectiveness of different variations of a product or marketing campaign. It involves randomly splitting a sample of users into two groups and exposing each group to a different version of the product or campaign. However, despite its widespread use, A/B testing has several limitations that make it a flawed methodology, particularly when compared to bespoke machine learning models. In this blog post, we will explore some of these limitations and discuss why a type of bespoke machine learning model, the holistic business measurement and optimization (HBMO) model, is a better alternative.
Limited sample size
One of the main limitations of A/B testing is that it requires a relatively large sample size to achieve statistically significant results. This can be a problem for companies with a limited user base, and it is especially acute for businesses trying to optimize for rare events, such as high-value conversions, where collecting enough data for a meaningful comparison can take a very long time.
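To make the sample-size point concrete, here is a minimal sketch (standard library only) of the usual normal-approximation formula for the per-group sample size needed to detect a difference between two conversion rates. The baseline and uplift figures are illustrative assumptions, not data from any real campaign.

```python
import math
from statistics import NormalDist

def required_sample_size(p1, p2, alpha=0.05, power=0.80):
    """Per-group sample size to detect p1 vs p2 with a two-sided z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Illustrative rare-event scenario: 2.0% baseline vs a hoped-for 2.5%.
n = required_sample_size(0.020, 0.025)
print(n)  # on the order of tens of thousands of users *per group*
```

Even a modest uplift on a rare event demands tens of thousands of users per arm, which is exactly why small user bases struggle with A/B testing.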
Limited scope
A/B testing is also limited in the scope of the comparison. It only compares two versions of a product or campaign at a time, and it does not allow for multiple variations or for examining interactions between different factors. This can make it difficult to identify the true cause of a change in user behavior and can lead to suboptimal decisions. What's more, in A/B testing you need to make sure that only the factor you are interested in varies and that everything else stays more or less fixed. This is a big ask, since mother nature has little to no respect for our marketing wishes. On any given day, everything varies with time and location. As such, the ideal A/B testing setup is at best wishful thinking.
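The interaction problem can be seen in a toy 2×2 example. Suppose, purely hypothetically, that a new headline and a new image each look like an improvement when tested one at a time against the baseline, yet combine badly. A sketch:

```python
# Hypothetical conversion rates for every headline x image combination.
rates = {
    ("H1", "I1"): 0.020,  # baseline
    ("H2", "I1"): 0.030,  # new headline alone helps
    ("H1", "I2"): 0.035,  # new image alone helps
    ("H2", "I2"): 0.025,  # together they interact badly
}

# One-factor-at-a-time A/B testing from the baseline:
best_headline = "H2" if rates[("H2", "I1")] > rates[("H1", "I1")] else "H1"
best_image = "I2" if rates[("H1", "I2")] > rates[("H1", "I1")] else "I1"
sequential_pick = (best_headline, best_image)

# Examining all combinations jointly, as a model of the full space can:
joint_pick = max(rates, key=rates.get)

print(sequential_pick, rates[sequential_pick])  # the two "winners" combined
print(joint_pick, rates[joint_pick])            # the true optimum
```

Two sequential A/B tests would confidently ship the combination ("H2", "I2"), which converts worse than the true optimum ("H1", "I2") that a joint analysis finds.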
Local optimization
A/B testing typically focuses on optimizing one metric at a time, such as click-through rate, conversion rate, or revenue. However, this can lead to local optimization, where a change that improves one metric hurts another. For example, increasing the click-through rate might result in a decrease in the conversion rate. Optimizing metrics in isolation makes it easy to miss positive synergies and to overlook negative ones.
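A toy illustration of this trap, with made-up numbers: ranking variants by click-through rate alone picks a different winner than ranking by the downstream outcome those clicks feed into.

```python
# Hypothetical per-variant metrics.
variants = {
    "A": {"ctr": 0.10, "cvr": 0.05},  # fewer clicks, but they convert well
    "B": {"ctr": 0.12, "cvr": 0.03},  # flashier ad: more clicks, worse leads
}

best_by_ctr = max(variants, key=lambda v: variants[v]["ctr"])
# Conversions per impression = CTR x conversion rate of the resulting clicks.
best_overall = max(variants, key=lambda v: variants[v]["ctr"] * variants[v]["cvr"])

print(best_by_ctr)   # "B" wins an A/B test judged on clicks alone...
print(best_overall)  # ...but "A" produces more conversions per impression
```

A test judged only on clicks would ship variant B and quietly lose conversions.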
Bespoke machine learning models for business
Machine learning models in the shape of HBMO can overcome many of these limitations. They can analyze large amounts of data and identify patterns that are not immediately apparent from A/B testing. They can also be used to optimize multiple metrics simultaneously and account for interactions between different factors. Additionally, HBMO models can be used for predictive modeling, which can help to identify which users are most likely to convert and target them with personalized campaigns.
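As a generic illustration of propensity scoring (this is not the HBMO model itself; the features, weights, and users below are invented for the sketch), one can score users with a logistic function and target the highest-propensity segment:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

# Hypothetical learned weights: recent visits and past purchases raise propensity.
WEIGHTS = {"recent_visits": 0.8, "past_purchases": 1.5}
BIAS = -3.0

def conversion_propensity(user):
    """Predicted probability that this user converts."""
    z = BIAS + sum(WEIGHTS[k] * user[k] for k in WEIGHTS)
    return sigmoid(z)

users = {
    "u1": {"recent_visits": 1, "past_purchases": 0},
    "u2": {"recent_visits": 4, "past_purchases": 2},
    "u3": {"recent_visits": 0, "past_purchases": 1},
}

# Rank users by predicted propensity and target the top of the list.
ranked = sorted(users, key=lambda u: conversion_propensity(users[u]), reverse=True)
print(ranked)  # most likely converters first
```

The ranking, not the absolute scores, is what drives who gets the personalized campaign.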
Conclusion
As we have so modestly hinted at, A/B testing is a flawed methodology that has several limitations, particularly when compared to HBMO models. While A/B testing can be useful for making simple comparisons, it is not well suited to more complex optimization problems. HBMO models, on the other hand, can analyze large amounts of data, identify patterns, and optimize multiple metrics simultaneously. Consequently, for businesses looking to drive innovation and gain a competitive advantage, HBMO models are a better alternative to A/B testing.
Dr. Michael Green
CEO, Co-Founder