When Not to Test

As an Internet professional, you've probably gotten your fill of testing advice. Almost every tip you come across likely includes the caveat "you should test to see what works best for your situation."

Between that advice and the proliferation of low-cost testing tools available, it's no wonder that testing has become the latest digital marketing craze.

It may come as a surprise, then, that testing is not always the quickest or most effective path to higher conversions. Here are some situations in which testing can lead to inaccurate conclusions or very slow improvements.

When your traffic is unstable

Under ideal conditions, your landing page should be the only thing that changes during your test. Otherwise, it is difficult to attribute any outcome to specific variables being tested. And one of the biggest external factors muddying up the testing waters can be traffic instability.

In most cases, you will combine a number of traffic sources for your test. The assumption behind this approach is that the combined population of people visiting your website behaves consistently, and that by testing you can find an alternative landing page design to which they will respond more favorably. So, you have to be careful that the composition of your audience doesn't change dramatically during the test. Otherwise, you may optimize for one population and then try to apply the results to a different one.

Many possible events could affect your traffic, such as (1) increasing or decreasing spending on particular campaigns; (2) significantly changing the mix of pay-per-click keywords and positions you are bidding on; (3) changes in search engine algorithms; or (4) launching new offline advertising channels or changing the message of your offline advertising.

Some of these events you will be able to predict or control, and others you will not. Your goal is to run your test during a period when you expect traffic to be typical of your "normal" patterns - in terms of both volume and source (what directed visitors to your site). If you can't prevent events that might bias your test, you will need some method of detecting anomalies and adjusting your analysis to compensate for their effects. If your traffic mix changes significantly while a test is running, you may have to restart your data collection, since conclusions drawn from the mixed data will be highly suspect.
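As a rough illustration of what an anomaly screen might look like, here is a minimal Python sketch. Everything in it - the function name, the two-standard-deviation threshold, and the sample numbers - is hypothetical, and a real screen would watch traffic sources as well as raw volume.

    import statistics

    def flag_unstable_days(daily_visits, sigmas=2):
        """Flag days whose visit counts deviate sharply from the period's
        mean - a crude, volume-only screen for traffic instability."""
        mean = statistics.mean(daily_visits)
        stdev = statistics.stdev(daily_visits)
        return [day for day, visits in enumerate(daily_visits)
                if abs(visits - mean) > sigmas * stdev]

    # A press mention or new campaign on day 5 stands out against
    # an otherwise steady week (days are numbered from zero):
    print(flag_unstable_days([1020, 980, 1010, 995, 1005, 4800, 990]))  # [5]

Any day this flags is a candidate for investigation before you trust the test data collected on it.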

When you don't have enough conversions to get statistical significance

Testing is based on statistics and probability. Let's assume that you have a constant flow of visitors to your landing page from a steady and unchanging traffic source. You decide to test two versions of your page design, and split your traffic evenly and randomly between them. You have two landing page experiences, each with its own random variables (the visitors) and its own measurable events (the conversion, or failure to convert).

The true probability of conversion for each page is not known, but it must lie between zero and one. From the law of large numbers, you know that as you sample a very large number of visitors, the measured conversion rate will approach the true probability of conversion. From the Central Limit Theorem, you also know that the true rate will fall within three standard errors of your observed rate with very high probability (99.7 percent), and that this margin of error keeps narrowing as you collect more data (shrinking in proportion to the square root of your sample size). In short, you'll need to measure not only enough visits, but also enough conversions, to feel certain that the conversion rate you have measured is an accurate predictor of the outcome you can expect over time.
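To make that concrete, here is a small sketch (assuming a simple binomial model; the function is illustrative, not from the original article) that computes the three-standard-error range around a measured conversion rate and shows how it tightens as data accumulates:

    import math

    def conversion_interval(conversions, visitors, sigmas=3):
        """Approximate range for the true conversion rate, using the
        normal approximation to the binomial distribution."""
        rate = conversions / visitors
        std_err = math.sqrt(rate * (1 - rate) / visitors)
        return (rate - sigmas * std_err, rate + sigmas * std_err)

    # The same 2% measured rate is far more trustworthy with more data:
    print(conversion_interval(20, 1000))      # roughly (0.007, 0.033)
    print(conversion_interval(2000, 100000))  # roughly (0.019, 0.021)

With only 20 conversions, the true rate could plausibly be anywhere from well under 1 percent to over 3 percent; with 2,000, the range is pinned down tightly.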

Here's a simple translation: when you flip a coin, you have two possible outcomes - heads or tails - each equally likely to come up. So the theoretical ratio is 1:1. But suppose you flipped a coin four times and got three heads. You would inaccurately conclude that the ratio of outcomes was 3:1. The more times you flip the coin, the more that ratio will level out, eventually coming extremely close to 1:1.
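A quick simulation makes the same point (this sketch is purely illustrative):

    import random

    def heads_fraction(flips):
        """Simulate fair coin flips and return the fraction of heads."""
        heads = sum(random.random() < 0.5 for _ in range(flips))
        return heads / flips

    # With 4 flips, lopsided results like 3:1 are common; by a million
    # flips, the fraction lands very close to the true 0.5.
    for n in (4, 100, 10_000, 1_000_000):
        print(n, heads_fraction(n))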

Now apply this thinking to your landing page test. Instead of heads and tails, your two possible outcomes are conversion and no conversion. If you have too few conversions, it is nearly impossible to run enough trials for the observed ratio of outcomes to converge on the theoretical one. Without that convergence, you just won't get accurate results.
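One common way to quantify this - a standard two-proportion z-test, offered here as an illustration rather than as the author's prescribed method - shows how the same relative lift can be meaningless or decisive depending on how many conversions back it up:

    import math

    def z_score(conv_a, visitors_a, conv_b, visitors_b):
        """Two-proportion z-test: how many standard errors apart
        are the two observed conversion rates?"""
        rate_a = conv_a / visitors_a
        rate_b = conv_b / visitors_b
        pooled = (conv_a + conv_b) / (visitors_a + visitors_b)
        std_err = math.sqrt(pooled * (1 - pooled)
                            * (1 / visitors_a + 1 / visitors_b))
        return (rate_b - rate_a) / std_err

    # The same 25% relative lift (2.0% vs. 2.5%), very different evidence:
    print(z_score(8, 400, 10, 400))          # ~0.48 - could easily be noise
    print(z_score(800, 40000, 1000, 40000))  # ~4.77 - very unlikely to be noise

A handful of conversions simply cannot separate a real improvement from random chance.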

You may be tempted to think that you could simply run the test long enough to get the numbers you need, dragging it out over weeks or months. The hazard with this approach is that it becomes increasingly difficult to avoid polluting your data with other variables, such as seasonal traffic spikes, changes in the competitive landscape, or other external factors that could affect your conversion rate. The longer your test runs, the more polluted the data may become, which could easily lead you to inaccurate conclusions.

When your website is a hot mess

If you have serious problems with your website design, if your checkout flow is cumbersome, or if your architecture is confusing and non-intuitive, testing won't do you much good. Testing works well for optimizing individual pages and elements, but without taking a more holistic approach to fixing your site, you run the risk of morphing it into an unusable, Frankenstein-like mess. The different pieces won't mesh together well, causing visitors to feel confused and creating a chaotic, unpredictable user experience.

If you've been testing, changing things and retesting but aren't seeing results, consider stepping back from testing and re-evaluating your overall site through the eyes of your customer.

- Are your messaging and terminology consistent with what resonates with your target audience?

- Does your website give users a clear path to accomplish their desired task?

- Does your design reflect professionalism and inspire trust?

- Do you have too many navigational choices, or confusing categories of information?

- Are your navigation labels intuitive and informative?

- Are your photographs and other graphics of high quality?

- Are your calls-to-action clear and appropriately placed?

- Are your fonts clear and easy to read, and is there a logical visual hierarchy that helps visitors understand the relative importance of different pieces of content?

- Is your copywriting developed specifically for an online audience, or did you simply recycle text from your written materials?

If your website is failing in these areas, tweaking one variable at a time with A/B testing will be a long, slow process. Your time and money would be better spent conducting usability tests and then redesigning your site and its information architecture based on that feedback, combined with best practices in user-centered design and conversion optimization. Then, once your new site is launched, you can begin the process of identifying individual sections or variables to test for further improvement.

There's no question that website testing is an important element of conversion rate optimization, but it isn't the only element. If your site has very low conversions, that may be the result of significant fundamental flaws that should be corrected before testing is even considered. Once you have a usable, well-designed site with a steady flow of traffic, you can begin the ongoing process of testing, fine-tuning, and continually optimizing for greater success.


About the Author: Tim Ash is the CEO of SiteTuners, Chair of Conversion Conference and the bestselling author of Landing Page Optimization.