Integrating Testing and Analytics for Maximum Impact

A natural relationship exists between testing platforms and analytics applications: one is designed to drive improvements on your site, the other to measure user behavior so that those changes and improvements can be quantified. While some testing applications do a good job of quantifying success on their own, integrating testing with a general Web analytics solution allows companies to develop a broad view across many individual tests.

On the subject of measurement, one important and frequently overlooked point is that the combination of measurement and testing should support both optimization and an incremental process of learning about your visitors and customers. Assuming you have a measurement program in place, integrating testing into those efforts simply requires some effort setting up information flows (e.g., through a Web analytics tool's existing custom variable capabilities). The upside of taking the time to integrate these systems correctly includes the ability to evaluate testing efforts across multiple user segments, taking into account offline or demographic dimensions not immediately available from observed website behavior.
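To make that concrete, here is a minimal client-side sketch of what such an information flow might look like. The `testingPlatform` and `analytics` objects and their methods are illustrative placeholders standing in for whatever APIs your testing and analytics vendors actually expose:

```typescript
// Illustrative only: forward the active test assignment from the testing
// platform into an analytics custom variable. Neither object below is a
// real vendor API; substitute your own tools' equivalents.

interface TestAssignment {
  testId: string;      // identifier of the running test
  variationId: string; // which variation this visitor was served
}

declare const testingPlatform: { getActiveAssignments(): TestAssignment[] };
declare const analytics: {
  setCustomVariable(slot: number, name: string, value: string): void;
};

// Write one "testId:variationId" pair per assignment so the analytics
// tool can later segment visitors by test participation.
testingPlatform.getActiveAssignments().forEach((a, i) => {
  analytics.setCustomVariable(i + 1, "test_participation", `${a.testId}:${a.variationId}`);
});
```

The key design choice is that the value carries both the test and the variation, so downstream segments can distinguish control visitors from each treatment.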

A common disappointment among companies deploying testing and optimization technology stems from tests that do not yield the expected radical improvements. Sometimes even the most dramatic design changes fail to produce significant differences in click-through or conversion rates. Surprising, yes, but avoidable by defining what success or failure means to you. Success in testing can be measured many different ways, depending on your goals and your organization.

- For some, success is a dramatic increase in a revenue-based metric, knowing that most senior stakeholders will respond to incremental revenue.

- For others, success is a small increase in key visitor engagement metrics, knowing that a series of small gains eventually adds up.

- For still others, success is a reduction in the number of usability problems present throughout the site, knowing that solving these enables users to complete their tasks more efficiently.

- For some, especially those with an increasingly dated site, success is simply being able to deploy a new look without negatively affecting existing key performance indicators.

Here's where it gets interesting: Sometimes our customers think that, because they have not experienced what they consider to be success, they have failed. But that's not necessarily the case, because testing powers a continual learning process. For example, if a particular image fails to increase conversion rates, you have learned that your audience does not respond to that image, or perhaps that the image's placement simply does not influence user behavior. In this context, there is no such thing as a failure in testing, only a failure to achieve a specific defined objective.

We know that not every test can yield huge increases in revenue for your business. Some tests will fail to produce the desired change; others will yield results, but not for your key performance indicators; and still others will simply fail to produce statistically significant differences. Even so, there are no failures in testing other than a failure to design your tests carefully and to consider what you have learned.

How to Integrate Testing and Analytics
While the specific measures you take will likely vary depending on the systems you have in place, the basic integration of testing and measurement systems involves exchanging data about test participation. This can be done either through an after-the-fact batch loading process or through real-time tag transformation. The most fundamental and important data to pass is a test campaign identifier - whatever value your testing application uses to keep track of the test(s) in which your visitors are actively participating.
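As an illustration of the batch approach, the sketch below (TypeScript on Node 18+) reads a hypothetical nightly CSV export of test assignments and posts it to an assumed analytics import endpoint. The file format, field names, and URL are all assumptions for illustration, not any particular vendor's API:

```typescript
// Batch-load sketch: push a testing platform's assignment export into an
// analytics import endpoint. Adapt the parsing and upload to your tools.

import { readFileSync } from "fs";

interface AssignmentRecord {
  visitorId: string;  // shared key also present in the analytics data
  testId: string;     // campaign identifier from the testing platform
  variationId: string;
  assignedAt: string; // ISO-8601 timestamp of the assignment
}

// Parse a CSV export with the columns: visitorId,testId,variationId,assignedAt
function parseExport(path: string): AssignmentRecord[] {
  return readFileSync(path, "utf8")
    .trim()
    .split("\n")
    .slice(1) // skip the header row
    .map((line) => {
      const [visitorId, testId, variationId, assignedAt] = line.split(",");
      return { visitorId, testId, variationId, assignedAt };
    });
}

async function uploadBatch(records: AssignmentRecord[]): Promise<void> {
  // Illustrative endpoint; substitute your analytics vendor's
  // data-import mechanism here.
  await fetch("https://analytics.example.com/api/import/test-assignments", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(records),
  });
}

uploadBatch(parseExport("assignments-export.csv")).catch(console.error);
```

Whichever route you choose, the visitor identifier must match the one your analytics tool already records, or the two data sets cannot be joined.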

During an integration you will need to ensure that data is flowing into the analytics application and can be used to create the metrics and visitor segments necessary for deeper analysis. Simply having an ID to indicate participation in "any test" is not enough; you want to pass data that identifies participation at the visitor or session level for each test being run. Ideally, your analytics platform will also allow you to load test metadata to increase the granularity at which analysis can be performed. If you're not able to get this level of detail, not to worry - you will still benefit from testing. But keep this practice in mind as you upgrade your measurement technology, and always look for opportunities to dig deeper into your test results.
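Test metadata might take a shape like the following; the field names are illustrative. The point is simply that a human-readable record keyed by the same test campaign identifier lets reports speak in terms of names and hypotheses rather than opaque IDs:

```typescript
// Illustrative metadata record: loading this alongside the raw test ID lets
// reports show names and hypotheses instead of opaque identifiers.
interface TestMetadata {
  testId: string;     // matches the campaign identifier passed at runtime
  name: string;       // human-readable test name
  hypothesis: string; // what the test was designed to learn
  variations: Record<string, string>; // variationId -> description
  startDate: string;
  endDate?: string;   // left open while the test is still running
}

const exampleTest: TestMetadata = {
  testId: "T-1042",
  name: "Homepage hero image",
  hypothesis: "A product-focused hero image increases click-through to the catalog",
  variations: { A: "Control: lifestyle image", B: "Product close-up" },
  startDate: "2024-01-01",
};
```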

Done well, this type of integration allows the measurement team to create segments, build key performance indicators, and drill down into the activity of individual visitors based on test participation (through the use of data warehousing and customer experience management technologies). The integration of measurement and testing is designed to help better quantify test performance and the resulting impact on the business. Properly used, integrated systems support a wide range of metrics and measures, as well as analysis of both the short- and long-term impacts of tests.
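To show the kind of analysis this unlocks, here is a simple sketch that computes a per-variation conversion rate from analytics records tagged with test participation. The record shape is an assumption for illustration; in practice this logic would live in your analytics tool's segmentation or reporting layer:

```typescript
// Given joined analytics records tagged with test participation, compute a
// conversion rate per variation for one test. Data shape is illustrative.

interface SessionRecord {
  visitorId: string;
  testId?: string;      // absent if the visitor was not in a test
  variationId?: string;
  converted: boolean;   // did this session reach the conversion goal?
}

function conversionByVariation(
  sessions: SessionRecord[],
  testId: string
): Map<string, number> {
  const totals = new Map<string, { n: number; conversions: number }>();
  for (const s of sessions) {
    if (s.testId !== testId || !s.variationId) continue;
    const t = totals.get(s.variationId) ?? { n: 0, conversions: 0 };
    t.n += 1;
    if (s.converted) t.conversions += 1;
    totals.set(s.variationId, t);
  }
  const rates = new Map<string, number>();
  totals.forEach((t, variation) => rates.set(variation, t.conversions / t.n));
  return rates;
}
```

The same participation tags can feed longer-horizon analyses, such as comparing repeat-visit or lifetime-value metrics across variations well after a test has ended.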

About the Author: Eric J. Hansen is the founder and CEO of SiteSpect, a multivariate testing and behavioral targeting platform for websites and the mobile Web. Eric is a frequent speaker at conferences covering Web analytics and optimization, and writes regularly on topics dealing with the intersection of marketing and technology. This article is based on research conducted by Eric Peterson of Web Analytics Demystified.