
Landing Page Test Hygiene: 5 Musts to Get Valid Results

Posted on 4.03.2017
There are countless marketing references telling people to “test everything.” There are even some places that tell people to test everything “now.” That’s not sound advice.

Not only should you not test everything; you should be very deliberate about testing the things that matter, and keep the tests pristine so the results you end up with are valid.

If you’re new to testing, that last bit may not be as easy as you think – it’s not as simple as “don’t make other changes while the test is running.” (Although, yes, you should not do that.)

1. Ensure you have enough stable traffic

There are a few things about traffic that can mess up tests.

+ Raw volume. Tests require significant amounts of traffic and conversions to work, so marketers should ensure the page they're testing has enough of both to get started. Marketers will usually want a page with hundreds to thousands of visits per day, and an absolute minimum of 10 conversions per day.

+ Seasonal spikes and dips. Tests conducted during Cyber Monday or the Christmas season are likely to produce traffic spikes, which can invalidate tests. Optimization professionals will want to plan tests around known spikes and dips by consulting historical traffic from their analytics solution.

+ Shifts in traffic mix. If a business is testing a landing page and suddenly brings in more traffic from one particular source (e.g., paid advertising via AdWords), the traffic mix will change in a way that can affect test validity. Marketers want their tests to be valid (meaning the test measures the goal well) and reliable (meaning the assessment tool produces stable and consistent results). For brands' tests to have both of those qualities, they need a steady stream of traffic that's not likely to change during the period of the test.
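As a rough sketch, the raw-volume check above can be automated against daily figures exported from an analytics tool. The thresholds and the sample numbers here are illustrative, not prescriptive:

```python
# Pre-test traffic check. The thresholds mirror the guidelines above:
# hundreds-to-thousands of daily visits, at least 10 daily conversions.
MIN_DAILY_VISITS = 300
MIN_DAILY_CONVERSIONS = 10

def ready_to_test(daily_visits, daily_conversions):
    """Return True if the page clears the raw-volume thresholds on average."""
    avg_visits = sum(daily_visits) / len(daily_visits)
    avg_conversions = sum(daily_conversions) / len(daily_conversions)
    return avg_visits >= MIN_DAILY_VISITS and avg_conversions >= MIN_DAILY_CONVERSIONS

# Hypothetical daily figures for two candidate pages:
print(ready_to_test([450, 520, 610], [12, 15, 11]))  # True
print(ready_to_test([120, 90, 140], [4, 6, 3]))      # False
```

Checking the average over a week or more, rather than a single day, also smooths out the day-of-week effects discussed in the next section.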

2. Plan for how long the tests should run ahead of time

There are a few things that should impact how long tests should run: 

+ The data rate (number of conversions per day)

+ The size of improvements (percentage improvement)

+ Size of the test (number of alternative designs)

+ Days of the week (tests should run in full-week increments, so marketers don’t contaminate the data set)

+ The statistical significance the marketer wants to achieve (how sure they need to be, usually set at 90-95 percent)

Marketers should have all of those ready when planning for and explaining their tests.
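The factors above can be combined into a back-of-the-envelope duration estimate. This sketch uses a common rule-of-thumb sample-size formula (roughly 80 percent power at 95 percent significance); the baseline rate, target lift, and traffic figures are hypothetical:

```python
import math

def estimated_duration_weeks(baseline_rate, relative_lift, num_variants,
                             daily_visitors):
    """Rough test-duration estimate, rounded up to full weeks.

    Uses the common rule of thumb n ~= 16 * p * (1 - p) / delta^2 visitors
    per variant, where delta is the absolute difference to detect.
    """
    delta = baseline_rate * relative_lift            # absolute lift to detect
    n_per_variant = 16 * baseline_rate * (1 - baseline_rate) / delta ** 2
    total_visitors = n_per_variant * num_variants    # size of the test
    days = total_visitors / daily_visitors           # data rate
    return math.ceil(days / 7)                       # full-week increments

# Hypothetical: 5% baseline conversion rate, hoping to detect a 20%
# relative lift, champion + one challenger, 1,000 visitors per day.
print(estimated_duration_weeks(0.05, 0.20, 2, 1000))  # 3 (weeks)
```

Note how each input maps to a bullet above: data rate, size of improvement, size of the test, and full-week rounding.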

3. Know what kind of test you need for maximum impact

When people think of tests that change the elements of a page over a period of time, they usually think of split tests. Split tests are certainly very useful, but they are not the only approach; there's a lesser-known alternative: multivariate tests. It pays to understand the differences between the two, and to understand when to use each.

Split tests.

A/B tests are not very complex, since all marketers really have are a 'champion' page and a 'challenger' page. They run both at the same time and see how they perform against the business goal. That's dead simple. Split tests are easier to design, implement, analyze and explain. If marketers want to test a lot of variables, however, it can take a lot of time to work through all of the combinations.

Multivariate tests.

If brands have multiple elements they want tested and a lot of traffic, multivariate tests are far more efficient at finding the optimal combinations of elements on a page. In a test of this nature, multiple combinations of elements will be applied to the page, and marketers will learn which specific combinations produced the best results relative to their goal. Brands will deal with more complexity in test design and analysis, though, and they'll never really know why a certain combination worked best, only that the combination of elements outperformed the rest.
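The traffic cost of a multivariate test comes from how quickly combinations multiply. A small sketch makes this concrete (the element variations here are made up for illustration):

```python
from itertools import product

# Hypothetical page elements a multivariate test might vary.
headlines = ["Save time", "Save money", "Start your free trial"]
cta_labels = ["Start now", "Get started"]
hero_images = ["photo", "illustration"]

# Every combination becomes a variant that must receive enough traffic.
combinations = list(product(headlines, cta_labels, hero_images))
print(len(combinations))  # 3 * 2 * 2 = 12 variants
```

Twelve variants means splitting traffic twelve ways, which is why multivariate tests demand far more volume than a two-way split test.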

Marketers need to make an honest evaluation of where they are as a company, both from a data-rate standpoint and from a comfort-with-complexity standpoint. If businesses are just starting to get their feet wet, split tests will usually be the best bet.

4. For split tests, start with A/A tests

Some tests die at the technology setup phase of the project.

If marketers are setting up A/B tests, it often pays to start with a technology checkup before running the first “true” test. With split tests, that’s called A/A testing.

+ Start with a challenger page that is exactly the same as the champion page

+ Run the test at a high confidence level (90-95 percent)

+ Let the test run, and check to see if the conversion rates are about the same

If brands have a good tool, 9 times out of 10 in this scenario they will not see statistically significant differences between the champion and challenger page. If testers get that result, their technology is good to go: there's very little bias in the tool, and the results are more likely to be reliable.
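The "about the same" check can be made precise with a standard two-proportion z-test, which is one common way (not necessarily what any particular tool uses) to compare the two conversion rates. The conversion counts below are hypothetical:

```python
from math import sqrt, erf

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical A/A result: identical pages, similar conversion counts.
p = two_proportion_p_value(102, 2000, 95, 2000)
print(p > 0.05)  # a high p-value suggests no significant difference
```

In an A/A test, a p-value well above the chosen significance threshold is the expected outcome; a consistently low p-value on identical pages points to a problem with the tool setup.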

If the same pages produce different conversion rates, it’s time to check the tool setup or shop around for a better split testing solution.

5. Know your goals

Marketers can have relatively simple goals in mind for tests, especially if they have just one call-to-action (CTA) on a page, and the success measure is how many times the CTA gets clicked.

Sometimes, though, marketers will want to test something more complex. If they have multiple offers on a page, for example, it’s very possible for pages with lower conversion rates to outperform pages with higher conversion rates when it comes to profit. Brands need to plan for those things in advance:

+ If a company has just one CTA, or multiple CTAs that have the same value, its goal should be to maximize the conversion rate

+ If a company has multiple conversions with variable values, it should normalize by using a goal like average revenue per visitor
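The multiple-offer scenario above is easy to demonstrate with numbers. In this sketch (all figures hypothetical), the page with the lower conversion rate wins on revenue per visitor:

```python
def revenue_per_visitor(visitors, orders):
    """orders is a list of (conversion_count, offer_value) pairs."""
    return sum(count * value for count, value in orders) / visitors

# Hypothetical: page A converts 6% on a $20 offer,
# page B converts only 4% but on a $45 offer.
page_a = revenue_per_visitor(1000, [(60, 20.0)])
page_b = revenue_per_visitor(1000, [(40, 45.0)])
print(page_a, page_b)  # 1.2 vs. 1.8: the lower-converting page wins
```

Judged on conversion rate alone, page A looks like the winner; normalized to average revenue per visitor, page B is clearly more profitable.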

Conversion rate is a great metric, but it's not always the best one to consider.

Putting It All Together

When a company knows what kind of tests it needs, checks the testing tool before starting to run tests, controls for traffic sources while tests are running, and looks at the right metrics, it is more likely to get the most value and performance from its testing initiatives.

About the Author

Martin Greif brings 25-plus years of sales and marketing experience to SiteTuners where he is responsible for driving revenue growth, establishing and nurturing partner relationships and creating value for its broad customer base.