
Why Your A/B Testing Isn't Working As Well As You Think

Written by Isaac Rothstein | Aug 1, 2014

There's nothing groundbreaking about A/B testing. Direct marketing departments have been using it for years. Before the World Wide Web, it was done via infomercials and catalog mailers; since the Internet revolution, it has been used for website improvement. Giants like Amazon and Google use it. Even the Obama presidential campaign used it.

A/B testing simply means that while one group of people continues to use version A of a product, a percentage of people are asked (or are anonymously assigned) to use version B. Results from the two groups are compared to see which fares better. To some, A/B testing is more of a philosophy than just a way of testing.
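As a rough sketch of what that comparison step looks like in practice, the snippet below splits results between two variants and runs a standard two-proportion z-test to check whether the observed difference in conversion rates is statistically significant. All of the traffic and conversion numbers are invented for illustration.

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Compare conversion rates of variants A and B with a two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)   # pooled rate under "no difference"
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Invented numbers: 50,000 visitors per variant, variant B converts slightly better.
p_a, p_b, z, p = two_proportion_z_test(conv_a=1000, n_a=50000, conv_b=1120, n_b=50000)
print(f"A: {p_a:.2%}  B: {p_b:.2%}  z={z:.2f}  p={p:.4f}")
```

With 50,000 visitors per arm, a lift from 2.0% to 2.24% comes out clearly significant; the same lift on a few thousand visitors would not, which is exactly the limitation discussed below.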

Philosophy or simple testing tool, A/B testing continues to win enthusiasts across the digital marketing world. It has changed the way business is done. But when does A/B testing stop being effective?

The black-and-white approach of A/B testing can cause problems. What happens when you can only run a small number of tests? A/B testing is most useful on sizeable websites that receive hundreds of thousands of hits a day. What about when only a few offers can be tested? The differences recorded in such tests are too small to be statistically significant, and the results will not identify which variables cause specific customer responses.

Compounding the problem, response rates for direct marketing methods such as email or direct mail are low (usually less than 1 in 20, and sometimes as low as 1 in 200), and they're getting worse.
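To see why such low response rates strain A/B testing, consider a back-of-the-envelope sample size calculation using the standard two-proportion approximation: at a 1-in-200 base rate, even detecting a 20 percent relative lift takes tens of thousands of recipients per arm. The figures below are hypothetical.

```python
import math

def sample_size_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Approximate recipients needed per arm to detect a lift from p1 to p2,
    using the standard two-proportion normal-approximation formula."""
    z_alpha = 1.96   # two-sided 5% significance level
    z_beta = 0.84    # 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# A 0.5% response rate (1 in 200) with a hoped-for 20% relative lift:
print(sample_size_per_arm(0.005, 0.006))   # roughly 86,000 recipients per arm
```

Test a dozen offers head-to-head at that base rate and the mailing list is exhausted long before anything reaches significance.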

In such cases, A/B testing has huge limitations, but another, more effective method is available. Advances in statistical analysis have allowed marketers to use a more dynamic technique based on experimental design. This method works best for businesses that market directly to a massive audience - financial institutions, online retailers and telecoms, for example.

Experimental design allows for massive increases in the number of variables a direct marketing campaign can evaluate. Companies can estimate the impact of many variations (offers, incentives, mail formats and so on) while actually testing only a handful of them.

How is this possible?

The models use a carefully chosen subset of variable combinations as proxies for the full set, so the effect of each variable can be estimated without testing every combination. This means companies can quickly make adjustments and, by responding at speed, improve both the overall effectiveness of their campaigns and the underlying economics. Marketing campaigns based on the experimental design model can increase consumer responses by between 300 and 800 percent, which can add billions of dollars to the top and bottom lines.
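The exact models vary from practitioner to practitioner, but the core idea can be sketched as a main-effects regression over a fraction of the full design: mail a structured subset of combinations, estimate the effect of each variable level, then predict response rates for every combination, including ones that were never mailed. The variables, levels and response rates below are all invented.

```python
import itertools
import numpy as np

# Three campaign variables with a few levels each (all invented).
levels = {"offer":   ["discount", "bundle", "loyalty"],
          "format":  ["postcard", "letter"],
          "message": ["price", "convenience"]}

all_combos = list(itertools.product(*levels.values()))  # 3 * 2 * 2 = 12 combinations

def encode(combo):
    """One-hot encode a combination, dropping one reference level per variable."""
    row = [1.0]  # intercept
    for opts, choice in zip(levels.values(), combo):
        row += [1.0 if choice == opt else 0.0 for opt in opts[1:]]
    return row

# Suppose only 6 of the 12 combinations were actually mailed (a balanced fraction),
# with these observed response rates (invented):
tested = [all_combos[i] for i in (0, 3, 5, 6, 9, 10)]
observed = np.array([0.021, 0.034, 0.041, 0.025, 0.030, 0.018])

# Fit the main effect of each level by least squares, then score every
# combination, including the six that were never mailed.
X = np.array([encode(c) for c in tested])
coef, *_ = np.linalg.lstsq(X, observed, rcond=None)
predictions = {c: float(np.dot(encode(c), coef)) for c in all_combos}
for combo, rate in sorted(predictions.items(), key=lambda kv: -kv[1])[:3]:
    print(combo, f"predicted response {rate:.1%}")
```

Scale the same idea up to a handful of variables with more levels each and you get hundreds of scored combinations from a test of a dozen or so actual mailings, which is what the case below describes.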

As an example, one telecoms firm mailed several million customers every three months but found that both conversion and response rates were falling. It tested 18 different promotions, messages and formats, launching twice as many offers to the target segment as before. Once the campaign was complete, the firm modeled response rates for every possible combination - nearly 600 in total, including some that had never been launched in the marketplace. The top offers outperformed the response rates of the existing best offer by between 300 and 400 percent.

More significantly, the telecoms firm learned which variables caused its customers to respond the most, and the testing revealed some unexpected results. The firm had expected the most expensive offers to produce the highest response rates. As it turned out, they performed worse than offers that were less generous to the customer. The format of the mailing, the content of the message and the duration of the promotion were the key factors driving the best response and conversion rates.

In the end, the campaign converted a much greater proportion of customers to the firm's top-tier plans, increasing ARPU (average revenue per user) by 20 percent. It is unlikely this would have happened with A/B testing.

Naturally, a company does not become more effective with experimental design alone - it needs to be run in conjunction with improvements in other areas of the business:

- Capacity. Successful experimental design means businesses have to develop the skills to define customer segments based on behavior and needs. In the example above, one segment consisted of families who wanted an 'in any room' service; targeting them with information about how that could be achieved improved response and conversion rates. A different segment (young households) was not so easy to impress - they wanted easy-to-understand technology and cheaper prices. Using this kind of insight rather than simple demographics such as location and income allowed the firm to develop far more relevant messages.

- Talent. Making certain that insights from effective multivariate testing are used appropriately in the next round of campaigns typically requires a business to develop new processes and acquire new skills. People in customer-facing roles may need extra training to deal more appropriately with customer inquiries and responses, or to persuade customers to consider products and services that create more value for the business.

- Decisions. Using financial modeling, businesses need to put budgetary thresholds in place, such as targets for profitability, to serve as guidelines for subsequent campaigns (a simple illustration follows this list). Such thresholds help accelerate decision-making and create an experimental model that is efficient and repeatable.
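As a hypothetical illustration of that last point, a profitability threshold can be applied as a simple filter on modeled offers before anything is launched. The offer names, rates and costs below are all invented.

```python
# Hypothetical decision rule: launch only modeled offers whose expected
# profit per recipient clears a preset budgetary threshold (figures invented).
MIN_PROFIT_PER_RECIPIENT = 0.15  # threshold in dollars

candidate_offers = [
    # (offer name, predicted response rate, margin per response, cost per piece)
    ("bundle / letter / price",          0.041, 12.00, 0.30),
    ("loyalty / postcard / convenience", 0.030,  9.00, 0.22),
    ("discount / letter / convenience",  0.018,  7.50, 0.30),
]

for name, rate, margin, cost in candidate_offers:
    expected_profit = rate * margin - cost
    verdict = "launch" if expected_profit >= MIN_PROFIT_PER_RECIPIENT else "hold"
    print(f"{name}: expected profit ${expected_profit:.2f}/recipient -> {verdict}")
```

Because the rule is mechanical, it can be reapplied to each new round of modeled offers without reopening the budget debate, which is what makes the process repeatable.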

The Internet has evolved. Social networks and mobile devices mean that companies have more channels of communication open to them than ever before. This creates better direct marketing opportunities - but only if businesses can identify the attributes of a campaign that actually influence customer behavior the most.

A/B testing is great for black-and-white scenarios. Experimental design offers a much broader spectrum for matching the right offer to the right customer.

This is a guest post by Isaac Rothstein. Isaac is an analyst at Infinite Conversions, a digital agency focused on improving websites' real-world financial metrics through conversion rate optimization.