Why A/B Testing Fails for SMBs—And What to Do Instead

updated on 12 March 2025

TL;DR

If you're a small or medium-sized business struggling to optimize product conversion through A/B testing, you're not alone. In fact, A/B testing tends to work best for large companies, while most other businesses lack the necessary traffic, expertise, and scale to achieve a positive ROI. This article explores the main barriers SMBs face with A/B testing, alternative strategies they can use, and how AI is changing the game for them.

Key Obstacles SMBs Face with A/B Testing

We have interviewed dozens of SMBs to understand their challenges with A/B testing and have run over 100 A/B tests in our own careers at companies of all sizes, from startups to big tech. From this, we’ve identified four main obstacles that SMBs face when running A/B tests with in-house teams:

  1. They don’t have enough traffic.
  2. They lack the required expertise.
  3. They can’t invest enough time.
  4. Considering all factors, the return on investment is likely negative.
Barriers to A/B testing

Problem #1: Not Enough Traffic

The single most important barrier to A/B testing is a lack of traffic, i.e., sample size. A/B testing means exposing a sample of users to two or more variants and checking for a statistically significant difference in performance between them. The required sample size for such a significance test is determined by:

  1. The effect size. How big is the difference in performance between the variants? The smaller the difference, the larger the required sample. Therefore, micro-optimization of conversion rates requires a much larger sample than testing major changes.
  2. The baseline conversion rate. What is the current conversion rate of the control variant? The fewer users convert, the bigger the required sample to determine statistically significant differences in conversion rates between test groups.
  3. The number of variants tested. The required sample size scales 1:1 with the number of variants being tested. That’s one of the reasons why, in practice, multivariate tests are rarely feasible.
  4. The required confidence level. How sure do you need to be that an observed difference isn’t just noise? The de facto industry standard is a 95% confidence level, i.e., a 5% significance level. (At Meta, we used 99% for testing feature changes, but that’s a different story given the scale of the business.)

Now, let’s take the example of an SMB eCommerce store with 400k monthly website visitors and a 1.5% website visit-to-purchase conversion rate. They want to test two versions of their website. The smallest effect size they want to detect is a 5% conversion rate uplift. This test requires a minimum sample size of 830k website visitors (use a free calculator to try different variables).
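For readers who want to reproduce this number, here is a minimal sketch of the underlying power calculation in Python, assuming the statsmodels library and the conventional 80% statistical power (exact results vary slightly between calculators):

```python
# Sample size for detecting a 5% relative uplift on a 1.5% baseline
# conversion rate at 95% confidence and 80% power (assumed defaults).
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.015           # control conversion rate
variant = baseline * 1.05  # smallest uplift worth detecting: +5% relative

# Cohen's h: standardized effect size for comparing two proportions
effect = proportion_effectsize(variant, baseline)

n_per_variant = NormalIndPower().solve_power(
    effect_size=effect,
    alpha=0.05,            # 95% confidence level
    power=0.80,            # 80% chance of detecting a true uplift
    alternative="two-sided",
)
print(f"{n_per_variant:,.0f} per variant, {2 * n_per_variant:,.0f} total")
# -> roughly 420k per variant, ~840k in total, in line with the ~830k above
```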

This test would need to run for about two months to collect a sufficient sample size. Now the question is: Does it make sense to let a test run for that long? The clear answer is no. That’s because:

  1. Iterations are too slow. According to research published in HBR, 80–90% of A/B tests fail, meaning they don’t deliver a statistically significant result. Teams need to iterate quickly to learn from their tests and ultimately find “winners.” Running a series of tests is necessary, but such an iteration process isn’t feasible if each test runs for two months.
  2. Tracking is unreliable. A/B testing requires tracking users so that they consistently see the same variant and their conversions are attributed correctly (see the bucketing sketch after this list). Website tracking typically relies on cookies, which aren’t fully persistent (users delete them, switch devices, etc.). Since ~30% of cookie data is lost within a month, tests lasting longer than a month become unreliable. (Product tests with persistent user IDs are a different story.)
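To make the consistency requirement concrete, here is a minimal sketch of deterministic, hash-based variant assignment, a common pattern for keeping a returning user in the same bucket. The `user_id` is a hypothetical stable identifier; on websites it usually lives in a cookie, which is exactly the part that gets lost:

```python
# Deterministic bucketing: the same (user, experiment) pair always maps
# to the same variant, with no server-side state required.
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants: tuple = ("control", "treatment")) -> str:
    """Hash user and experiment name into a stable bucket."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Stable as long as the ID survives; if the cookie holding user_id is
# deleted, the user is effectively re-randomized and the test degrades.
print(assign_variant("cookie-abc123", "homepage-redesign"))
```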

In sum, SMBs typically don’t have the required traffic to run A/B tests within a reasonable timeframe. This is especially true for incremental conversion rate optimization, which aims to find many small uplifts (read more in our blog post on testing algorithms).

Problem #2: Lack of Expertise

A second key barrier to A/B testing is a lack of experience and expertise. That’s because the success of each test ultimately depends on the quality of the hypotheses being tested.

A/B testing involves some technical complexity, but that is relatively easy to overcome: the required knowledge is freely available to anyone willing to learn it. This is not the type of expertise that is difficult to build or acquire. The real limitation lies in experience with formulating hypotheses specific to the conversion problem at hand: creating a UI that minimizes drop-off at the sign-up step, nudging users to add items to a shopping cart, or designing pricing tables that help users choose a subscription plan.

Typically, the hypothesis is defined by a product manager (PM) whose team “owns” the codebase for that particular part of the user journey. In big tech companies, this ownership is highly departmentalized: one PM owns the registration and login flow, while another owns the onboarding experience. These PMs are experts in their specific domains, having built a deep understanding of the respective problem space through data analysis, user interviews, and extensive A/B testing. Each test further expands this knowledge.

However, building this kind of knowledge internally is much harder for smaller organizations. First, they can’t compete with big tech companies when it comes to hiring and retaining the best PMs with expertise in growth optimization. Second, they can’t afford to assign one PM (and a tech team) solely to growth optimization. Often, it’s the business owner or a technical marketer who creates A/B tests.

Lacking specialization and facing competing priorities, such teams fail to develop the expertise needed to formulate high-quality hypotheses that yield statistically significant outcomes. Testing then neither delivers the expected impact on conversion nor builds incremental knowledge.

Problem #3: Lack of Time

A/B testing is a manual and time-consuming process, considering all the steps involved:

  • Analyzing data to craft a hypothesis
  • Designing the variants
  • Implementing the variants in the codebase (+ QA testing)
  • Setting up the split test in a testing tool (defining targeting, target metrics, test duration)
  • Monitoring test assignment
  • Analyzing results and defining the next iteration
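For illustration, the setup step usually boils down to a handful of parameters like these (field names are hypothetical, not taken from any specific testing tool):

```python
# A hypothetical split-test configuration covering the setup step above.
experiment_config = {
    "name": "checkout-button-test",
    "targeting": {"device": "all", "new_visitors_only": False},
    "variants": {"control": 0.5, "new_button": 0.5},  # traffic split
    "primary_metric": "purchase_conversion",
    "min_runtime_days": 14,          # cover weekly seasonality
    "sample_per_variant": 420_000,   # e.g., from a power calculation
}
```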

This is particularly challenging for small companies without a dedicated team for A/B testing. For example, consider a very successful Shopify store generating a few million in annual revenue with a D2C product and a small team (five people or fewer). For such organizations, it’s extremely hard to carve out time for A/B testing, as there are always more pressing core business activities to take care of, such as procurement, shipping, handling customer complaints, and developing the brand or product they sell.

Problem #4: Negative ROI

Considering all the points mentioned above, for most small and medium-sized businesses, the expected rewards of A/B testing often do not outweigh the cost. In other words, they’d be better off not running any A/B tests with internal resources.

But what are the costs of A/B testing? Let’s take the example of an SMB SaaS company with €10M ARR and a dedicated team for running experiments. In Western Europe, such a team costs at least €500k per year (PM, 2–3 developers, designer, analyst). An A/B testing tool costs this company about €100k per year.

Now, let’s assume this team is not entirely new to the task and has already acquired some domain expertise. On average, they deliver a 5% incremental uplift in conversion through A/B testing each year. Even then, the team doesn’t cover its own cost (€500k in incremental revenue vs. €600k in team and tooling costs). Considering that newly formed teams rarely deliver any incremental uplift in the first 12–18 months, it’s an investment only high-growth companies with deep pockets should make.
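Spelled out with the figures from this example:

```python
# Back-of-the-envelope ROI, using only the figures quoted above.
arr = 10_000_000        # annual recurring revenue, EUR
team_cost = 500_000     # PM, 2-3 developers, designer, analyst
tool_cost = 100_000     # A/B testing platform
uplift = 0.05           # assumed yearly conversion uplift from testing

incremental_revenue = arr * uplift        # EUR 500,000
total_cost = team_cost + tool_cost        # EUR 600,000
roi = (incremental_revenue - total_cost) / total_cost
print(f"ROI: {roi:.0%}")                  # -> ROI: -17%
```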

Which Options Do SMBs Have?

Given all these constraints, what options do SMBs have for data-driven product and conversion rate optimization? Simply put, there are three paths forward:

  • Not running any A/B tests: Focus on building core product value and delay A/B testing until the company has grown beyond a certain size—when it has more traffic and can afford a team to run experiments. This shouldn’t be seen as a “non-option,” since the more product value you build now, the more you can capture later.
  • Working with an agency: Partnering with an agency specialized in A/B testing has the advantage that these businesses bring experience from optimizing many products. They can quickly identify the most critical areas for optimization and formulate high-quality hypotheses that maximize the expected impact or learning per test. Working with an agency also solves the ROI dilemma, as it eliminates the need to hire a dedicated product team.
  • Using AI & ML optimization: A new category of AI-powered optimization tools, such as Levered, departs from traditional A/B testing and instead uses machine learning algorithms to optimize products. This radically reduces the cost, time, expertise, and traffic needed for data-driven product optimization.

The Future of Optimization: How AI Is Changing the Game

While classic A/B testing has remained largely unchanged for decades, machine learning and AI are now transforming product optimization at an unprecedented pace. This is especially good news for small and medium-sized businesses, as it makes continuous product growth optimization accessible to companies that can’t (or shouldn’t) rely on traditional A/B tests for the reasons mentioned above.

The main advantages of AI-powered optimization over classic A/B testing lie in its data efficiency and the high degree of automation that can be achieved:

ML algorithms require significantly fewer user interactions to predict which designs will convert best. That’s because the statistical machinery behind them, for example the Bayesian updating used in multi-armed bandits, refreshes its probability estimates after every observation and shifts traffic toward promising variants, instead of holding a fixed split until a significance threshold is reached. This makes massive multivariate optimization and personalization of user experiences feasible.
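To illustrate what such an adaptive approach can look like, here is a toy Thompson-sampling bandit with Beta posteriors. It gradually concentrates traffic on better-performing variants as evidence accumulates; this is a generic textbook sketch, not Levered’s actual algorithm:

```python
# Thompson sampling over conversion rates: sample a plausible rate for
# each variant from its Beta posterior, serve the variant with the
# highest draw, then update that posterior with the observed outcome.
import random

class BetaBandit:
    def __init__(self, n_variants: int):
        # Beta(1, 1) priors = uniform initial belief about each rate
        self.wins = [1] * n_variants
        self.losses = [1] * n_variants

    def choose(self) -> int:
        draws = [random.betavariate(w, l)
                 for w, l in zip(self.wins, self.losses)]
        return max(range(len(draws)), key=draws.__getitem__)

    def update(self, variant: int, converted: bool) -> None:
        if converted:
            self.wins[variant] += 1
        else:
            self.losses[variant] += 1

# Usage: over time, low-performing variants are served less and less.
bandit = BetaBandit(n_variants=3)
arm = bandit.choose()
bandit.update(arm, converted=random.random() < 0.02)
```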

Additionally, AI dramatically reduces the cost per test by automating nearly all the manual steps involved in running experiments—from identifying what to test to designing changes and implementing them in the codebase.

As a result, in the coming years, we can expect a wave of AI-powered product optimization tools that will make a large part of A/B testing obsolete. These innovations will empower SMBs to improve their products, lower customer acquisition costs, and reinvest those savings into building products their customers love.
