By Peter Prestipino, Editor-In-Chief
The testing and eventual optimization of Web-related assets is the most sophisticated of the digital disciplines. Depending on the approach taken by an enterprise, the execution, creative effort and investment can be simple or complex, minute or massive. Getting started can seem overwhelming, but it doesn't need to be that way.
Aided by powerful technologies and numerous best practices established over time through trial and error, 'Net professionals can and should feel confident that it is absolutely possible to "improve" the performance of any digital presence through testing initiatives. While new functionality is brought to market regularly, the first step remains the same: formulate a testing plan for your website.
Access Website Magazine's new "Big List of Testing & Optimization Tools" to identify offerings appropriate for the testing initiative of your enterprise (and its size and budget).
Dramatic improvements in conversion can be seen in every vertical; those in the media, ecommerce and service industries can expect increased engagement in whatever form matters most, whether pageviews, revenue, registrations or even an increase in user-generated content. There's no limit to the benefits of optimizing the experience, but if positive improvements are the aim, it's necessary to take the guesswork out of testing.
Running a test based on a gut feeling or "professional instinct" (or even what has worked for other companies) is, simply put, a bad practice. Fortunately, there is another, and far better, source of inspiration for this phase of the testing process: analytics.
There are several important insights Web professionals should look to acquire from their Web analytics data in order to identify optimization opportunities and "test-able" moments: the highest- and lowest-converting traffic sources, the pages with the highest or lowest value, and the key abandonment points within the buying journey (the checkout form, for example). Combing through this information, studying conversion rate and the top converting pages (along with exit and bounce rates) by traffic source and other variables including device type, is an essential part of the optimization process. If it's not being measured, Web professionals won't be able to manage (or optimize) it.
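As a minimal sketch of this kind of funnel analysis, the snippet below computes step-to-step continuation rates from stage counts; the stage names and numbers are illustrative, not from any real analytics export:

```python
# Hypothetical funnel counts as they might be pulled from an analytics export.
funnel = [
    ("product page", 10000),
    ("cart", 3200),
    ("checkout form", 1400),
    ("order confirmation", 900),
]

def drop_off_report(stages):
    """Return (from_stage, to_stage, continuation_rate) for each step,
    surfacing the biggest abandonment point in the journey."""
    return [
        (a, b, n_b / n_a)
        for (a, n_a), (b, n_b) in zip(stages, stages[1:])
    ]

for a, b, rate in drop_off_report(funnel):
    print(f"{a} -> {b}: {rate:.1%} continue")
```

The step with the lowest continuation rate (here, cart to checkout form) is the natural first candidate for testing.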
Consumers expect a Web page to load in a matter of seconds and a significant number will abandon sessions altogether if their expectations aren't met - wreaking havoc on conversion rates. Discover the Web's top resources for monitoring load time, render time, down time, requests and page size, and leverage them to start testing the variables suspected of slowing down the digital experience.
After an enterprise has an idea of what visitors are doing and where they're going, the next step is to understand why. Analytics solutions, however, can only reveal that there is a problem; usability testing can provide the answer as to why the problem may be happening. Usability tests essentially reveal points of friction in the funnel, where visitors become stuck, confused and frustrated. Some possible tests to consider include a browse flow (which reveals how visitors choose to start, such as via site search or navigation, whether product pages and their elements are appealing, and whether calls-to-action are clear and compelling enough to convert users from browsers to buyers) and a buy flow (which indicates how users move from product page through the checkout to the final order confirmation).
As computer scientist and U.S. Navy Rear Admiral Grace Hopper once said, "One accurate measurement is worth more than a thousand expert opinions."
With guesswork on the enterprise sidelines, it is time to concentrate on testing ideas for the digital experience.
The list of reasons website or application visitors aren't converting is long. Find out what elements - copy, calls-to-action, images - are causing friction and preventing users from following through at wsm.co/preventbuy.
After identifying within analytics the issues preventing a greater number of conversions from occurring or a deeper level of engagement from being achieved (the issues that warrant testing and demand optimization), it's time for the next phase of the process: selecting the variables that should be tested in order to optimize the digital experience.
The selection of variables for testing is an important phase of the testing process. While anything on a website can be tested, let the following serve as initial inspiration.
Calls-to-action (CTAs) indicate to users what they can expect, so brands should be the ones actively aiming to drive the conversation. A variety of CTA elements can be tested, including their color, size, contrast and placement. For example, an ecommerce merchant might modify the wording of a "Buy Now" button to "Add to Cart," change its color from orange to yellow and move it higher into the user's field of vision.
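Running a CTA test like this presumes visitors are split consistently between variants. One common approach, sketched below with hypothetical identifiers, is to bucket each user deterministically by hashing a user ID together with the experiment name, so the same visitor always sees the same variant across sessions:

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")) -> str:
    """Deterministically bucket a user into a variant. Hashing the
    experiment name with the user ID keeps assignments stable for a
    given user while remaining independent across experiments."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# The same user always lands in the same bucket for a given experiment.
variant = assign_variant("user-123", "cta-wording")
```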
Colors mean something different to everyone depending on past experiences and context, making them one of the most interesting testing variables. For example, orange is often said to attract impulse shoppers, while navy blue is thought to appeal to the budget-conscious.
If users have made the decision to purchase, sellers must eliminate distractions and one of the best ways to do that is to reduce the amount of information gathered from users or to simplify its collection. The number of required fields is an ideal testing variable (e.g., shorter forms, experts argue, yield more conversions). For example, what impact does removing the phone number or zip code field from a form have on conversion rate? What about separating the personal data and postal data collection into a multi-step checkout? Keep in mind, not all tests show positive results, and there may be a negative impact to modifying forms, or any test that prevents or limits the collection of data that can be used in the future.
Testing the inclusion (or exclusion) of social buttons/icons is another popular test. While the presence of social proof can encourage users to recommend a brand and provide some validation of its integrity, social buttons can also distract users and negatively impact conversion. Testimonials are an additional form of social proof that can be tested through inclusion/exclusion (as well as the depth and breadth of those elements). Test the placement of these testimonials, the quantity featured and the elements they contain (including the positive/negative nature of quotes or the presence of star ratings within testimonials).
Providing Web visitors an opportunity to search through content or products has been proven time and again as a way to increase engagement and draw visitors deeper into the conversion funnel. The execution of site search, however, is often woefully inadequate and lacking in quality. Testing the placement and size of search boxes, the search prompts that are used, as well as the type of actual results returned and the information shown could provide a very positive improvement to a number of key performance indicators.
Should bounce rate or abandonment rate be exceedingly high, it is essential to intercept users when they reveal the intention to leave the experience (which can be achieved in a variety of ways). Test the presentation of exit offers in this scenario with personalized and specialized incentives (e.g., discounts, free gifts, vouchers) to determine the impact on conversion. Internet retailers, for example, could present shoppers with items that were recently viewed or recommended by others. It's rare that any end-user will purchase on their initial visit, and testing the presence of exit offers and how those incentives are executed is another test that's worthy of enterprise-wide attention. Testing options obviously abound; it really comes down to deciding what needs to be tested and putting those tests into place. It's impossible to make improvements without doing so.
While sales/journey funnels will need to be set up appropriately to leverage price schemes for improved conversions, testing freemium versus free-trial models might be useful, as might adjusting the length of the trial or even the actual pricing of different plans. Learn more about using decoy pricing on your website.
There was a time when if 'Net professionals wanted to test how one element performed, doing so in a manual fashion was the only way to get it done (and with the help of developers). Thanks to some very powerful software solutions, however, it's easier than ever to test and optimize a website. When determining the ideal website testing solution for an enterprise, know there are several things to consider, including the pricing model (is it fixed-rate or usage based?), the characteristics of enterprises using the offering (size, budget), what's required for setup, the number of campaigns that can be run simultaneously (or over the course of a specific time period), and what's required for ongoing campaign creation and management.
Since we want testing to become a continuous, iterative process within our enterprise, this might be one of the most important decisions a Web professional makes. Not all solutions available on the market today are right for every enterprise, however, as each will have their own unique features and, of course, limitations.
Let's look at a few of the features most in demand by today's Web enterprises.
Multivariate testing: Multivariate testing uses several different values for multiple elements on one page to create countless combinations or versions of that page, each of which is exposed to a random segment of live traffic. An analyst can measure the impact each of those variables has on the Web page's conversion rate. Since they present many different versions of a page, multivariate tests require more time and traffic to achieve statistical significance. However, they allow marketers to test many elements of a Web page simultaneously. Most of the solutions on the market today offer some combination of A/B (split) testing and multivariate testing capabilities. Selecting the right solution at the outset will make scaling testing initiatives simpler in the long run.
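The combinatorial growth behind multivariate testing is easy to see in code. Assuming three page elements with two values each (the values below are purely illustrative), the test matrix already contains eight page versions, which is why these tests need so much more traffic than a simple A/B split:

```python
from itertools import product

# Illustrative element values; any page element could be varied.
headlines = ["Free shipping on all orders", "Save 20% today"]
button_colors = ["orange", "yellow"]
button_labels = ["Buy Now", "Add to Cart"]

# Every combination of element values becomes one page version.
variations = list(product(headlines, button_colors, button_labels))
print(len(variations))  # 2 x 2 x 2 = 8 versions to split traffic across
```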
Multi-page testing: Multi-page testing allows users to test an element that spans multiple pages of the website, while providing a consistent user experience for the site visitor. For example, if a company wants to test a design element of a multi-step checkout process, visitors who got the original variation on step one will continue to see the same variation through the rest of the checkout flow. This is a much more sophisticated feature (along with cross-domain testing features) and is not available through most of the more basic offerings.
A/B tests (as well as other forms of testing including those of the multivariate variety) can run for any length of time but ideally will do so for at least a full seven-day week (and typically no longer than a few months). When tests are run for too long, additional variables can influence the results (like seasonal changes or promotions). Access some guidance on the right length of time depending on the variable chosen and the traffic received.
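One common way to estimate an appropriate test length is to first compute the sample size needed per variant for a two-proportion test, then divide by available traffic. The sketch below uses the standard normal approximation; the baseline conversion rate and minimum detectable effect are illustrative assumptions, not benchmarks:

```python
import math
from statistics import NormalDist

def sample_size_per_variant(p_base, mde, alpha=0.05, power=0.80):
    """Approximate visitors needed per variant to detect an absolute
    lift of `mde` over baseline rate `p_base` in a two-sided test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # significance threshold
    z_beta = NormalDist().inv_cdf(power)            # power threshold
    p_new = p_base + mde
    variance = p_base * (1 - p_base) + p_new * (1 - p_new)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / mde ** 2)

# E.g., detecting a 1-point lift over a 5 percent baseline (illustrative).
n = sample_size_per_variant(p_base=0.05, mde=0.01)
```

Dividing `n` by the daily visitors each variant receives gives a rough minimum duration, which can then be rounded up to whole weeks to smooth out day-of-week effects.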
Native heatmapping: Many A/B testers use in-page Web analytics, or the analysis of user interaction on one Web page, to identify areas of confusion or barriers to conversion on a Web page. This data helps testers come up with ideas for website changes that might improve conversion. Heatmapping is one type of in-page Web analysis, which shows mouse or click activity. Note that some tools don't offer this feature natively, but instead offer it through third-party integrations.
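Under the hood, heatmapping amounts to aggregating raw interaction coordinates into grid cells; a toy version (with made-up click coordinates) might look like:

```python
from collections import Counter

def heatmap(clicks, cell=50):
    """Bin raw (x, y) click coordinates into `cell`-pixel grid squares;
    higher counts mark hotter regions of the page."""
    grid = Counter()
    for x, y in clicks:
        grid[(x // cell, y // cell)] += 1
    return grid

# Hypothetical click data: two clicks near the top-left, two lower down.
clicks = [(12, 40), (35, 20), (420, 610), (430, 605)]
hot = heatmap(clicks)
```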
User roles and permissions: The availability of different roles and permissions for different users allows companies to limit the ability to perform certain tasks. For example, user A might be able to simply view reports, whereas user B can design experiments and user C can actually launch experiments. This is useful if companies don't want all testing software users to be able to make direct changes to the website with no oversight.
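A minimal sketch of such a permission model, with role and action names invented for illustration, could be as simple as a role-to-actions mapping mirroring the three users described above:

```python
# Hypothetical permission matrix: user A views, B designs, C launches.
ROLE_PERMISSIONS = {
    "viewer":   {"view_reports"},
    "designer": {"view_reports", "design_experiment"},
    "launcher": {"view_reports", "design_experiment", "launch_experiment"},
}

def can(role: str, action: str) -> bool:
    """Return True if the role is allowed to perform the action;
    unknown roles get no permissions."""
    return action in ROLE_PERMISSIONS.get(role, set())
```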
Role-based workflow and approval: Related to user roles and permissions, this type of functionality allows users to follow a particular workflow to build and launch experiments within the tool. For example, user A might design and set up an experiment, and then pass it off through a chain of superiors for approval before launching the test.
Adaptive Algorithms: In some testing software tools, the user defines the percent of Web traffic that should be allocated to each variation being tested. For example, a user might decide to send 75 percent of traffic to the original version, which has a known conversion rate, and 25 percent to the new treatment. Once the test has reached statistical significance and seen enough Web traffic, the winning version is released to all visitors. However, some tools can use an adaptive algorithm to adjust the division of Web traffic as test results come in. This allows a company to take advantage of the winning variation by sending more traffic to it, while still exploring the possibility that the lower performing variation might still win. This helps discourage users from ending tests too early and getting false positives, while still allowing them to take advantage of what could be the winning variation.
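One simple adaptive algorithm of this kind is an epsilon-greedy bandit: most traffic goes to the variant with the best observed conversion rate, while a small fraction keeps exploring the alternatives. The sketch below is a minimal illustration of the idea, not any vendor's actual implementation:

```python
import random

class EpsilonGreedy:
    """Adaptive traffic allocator: route most visitors to the
    best-performing variant, but keep exploring the others."""

    def __init__(self, variants, epsilon=0.1):
        self.epsilon = epsilon
        # Per-variant [conversions, visitors] tallies.
        self.stats = {v: [0, 0] for v in variants}

    def choose(self):
        """Explore a random variant with probability epsilon,
        otherwise exploit the highest observed conversion rate."""
        if random.random() < self.epsilon:
            return random.choice(list(self.stats))
        return max(self.stats,
                   key=lambda v: self.stats[v][0] / max(self.stats[v][1], 1))

    def record(self, variant, converted):
        """Log one visitor and whether they converted."""
        conv, n = self.stats[variant]
        self.stats[variant] = [conv + int(converted), n + 1]
```

As results accumulate, `choose` naturally shifts the traffic split toward the leader while epsilon preserves enough exploration to catch a late reversal, which is what discourages premature winner declarations.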
Automated, machine-learning or predictive capabilities: Predictive capabilities allow testing and targeting software to predict visitor behavior, based on previous actions and the behavior of other, similar website visitors, and tailor content accordingly. Predictive targeting requires some self- or machine-learning capabilities on the part of the tool, where a computer model ingests data from various sources and makes a best guess regarding the most effective content to present to each visitor.
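As a heavily simplified stand-in for such a predictive model, the lookup below picks the historically best-converting content for a visitor segment; the segment keys, variant names and rates are all hypothetical, and a real tool would learn these scores from data rather than hard-code them:

```python
# Hypothetical historical conversion rates per segment and content variant.
HISTORY = {
    ("mobile", "returning"): {"video_hero": 0.041, "text_hero": 0.028},
    ("desktop", "new"):      {"video_hero": 0.022, "text_hero": 0.031},
}

def best_content(segment, default="text_hero"):
    """Pick the variant with the highest observed conversion rate for
    this segment; fall back to a default for unseen segments."""
    rates = HISTORY.get(segment)
    if not rates:
        return default
    return max(rates, key=rates.get)
```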
Testing comes down to understanding the drivers and inhibitors of conversion and doing something about them. While there's no guarantee that increasing the drivers and reducing the inhibitors will make a difference, in most cases that's exactly what happens. Dig into your conversion funnels, find out where users/visitors aren't converting, get creative with tests and make it an iterative process. Testing is never over.