by Imad Mouline, Gomez Web Performance Monitoring
Your new website is ready to launch. It looks great and has several
new media-rich features designed to engage your audience. But before
you deploy, can you be certain the site will look good and perform
consistently across multiple browsers? Is this even a question you're asking?
Microsoft’s Internet Explorer (IE) owned virtually the entire browser market until just a few years ago. Back then, developers could get by testing and optimizing only for that browser. But since then, Mozilla’s Firefox browser has secured 18 percent market share and Apple’s Safari browser 6 percent, according to Net Applications. Initial reports indicate that Google’s new Chrome browser nabbed one percent market share almost overnight, and some predict it will eventually garner 20 percent. In addition, there are multiple versions of each browser in use: IE6, IE7, the beta IE8, Firefox 2 and 3, etc. Altogether, this underscores a fact of life for website designers, developers and your entire IT department: Web pages can look and perform differently from one browser to another.
This issue goes beyond simply how a page renders, however. The new generation of browsers heralds a major change in the way browsers operate. This represents a tremendous opportunity to advance the state of the Web, and both developers and end users stand to benefit in the long term. But the magnitude of the change also implies substantial risks for the unprepared. Not staying ahead of these changes will mean lost customers and lost traffic.
Diversity and the New Browsers
Some industry pundits say the browser will become the operating system (OS) of the future — eclipsing Windows, Linux and others. Whether or not that unfolds, this much is clear: more of our data is moving to the “cloud,” Web 2.0 continues to spawn new, user-centric apps and media-rich services, and the browser is being asked to handle more of the processing load than ever before.
Chrome and the latest versions of IE, Firefox and Safari were all designed as platforms to handle this expanding generation of Web apps. The “war” is to see which vendor will enable and empower developers to build the most exciting, most frequently used apps. This is a positive step for the future of the Web experience. So is the vendors’ greater commitment to standards compliance, as we saw with Microsoft’s decision to make IE8’s default mode of operation the standards-compliant mode (instead of the backward-compatible IE7 mode). As a developer, you’re probably excited about these new browsers. Yet you may still need to support IE6 because most of your traffic still uses that version. At the same time, Firefox and Safari are increasing in use by consumers. So what choices do you make to support all these browsers? Until a vast majority of your visitors migrate to the newer browsers, there will be short-term development challenges. This process will take years to unfold. The result: Web performance chaos, or what Tier 1 Research calls “an unfortunate turn of events for application providers.”
Let’s take a look at four key differences in the way these new browsers operate to get a sense of their long-term promise and near-term pitfalls:
Parallel Connections: Older browsers like IE6 and IE7 were designed to make only two parallel connections per host (e.g., two images from the same server load at once). The new IE8, Firefox, Safari, and Google’s Chrome triple the number of parallel connections per host to speed up the browsing experience. (The maximum number of concurrent connections is then limited only by the host itself.) That’s fast, but not all the time. An example: Gomez ran tests against a website serving up content from three hosts behind a firewall. Some tests used IE8’s connection profile with 18 parallel connections; others used the IE6 connection profile with six parallel connections. While the 18-connection mode was faster at times, the average response time over hundreds of tests was actually slower. Why? Because the host server couldn’t keep up with the dramatic increase in connection parallelism. Sometimes there can be too much of a good thing.
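The effect Gomez observed can be illustrated with a toy queueing model. This sketch is not the Gomez test itself; all the numbers (service time, host capacity, congestion penalty) are illustrative assumptions. It shows how raising client-side parallelism beyond what the host can serve concurrently can make the total page load slower, not faster:

```python
import math

def page_load_time(num_requests: int, parallelism: int,
                   capacity: int = 8, service_time: float = 0.2,
                   congestion_penalty: float = 0.05) -> float:
    """Estimate the time to fetch num_requests objects from one host.

    parallelism        -- the browser's parallel-connection limit (assumed)
    capacity           -- concurrent requests the host handles well (assumed)
    service_time       -- seconds of server work per request (assumed)
    congestion_penalty -- extra seconds per wave for each connection the
                          client opens beyond the host's capacity (assumed)
    """
    # The host caps how many requests actually progress at once.
    effective = min(parallelism, capacity, num_requests)
    # Requests complete in "waves" of size `effective`.
    waves = math.ceil(num_requests / effective)
    base = waves * service_time
    # Excess open connections still consume server resources
    # (sockets, context switches), slowing every wave slightly.
    excess = max(0, parallelism - capacity)
    return base + waves * excess * congestion_penalty

# A host that copes with 8 concurrent requests: 18 connections lose
# to 6, because the extra 10 connections only add congestion.
slow = page_load_time(30, 18)
fast = page_load_time(30, 6)

# The same page against an amply provisioned host (capacity 32):
# now the higher parallelism wins, as the new browsers intend.
wide = page_load_time(30, 18, capacity=32)
```

Under these assumed numbers the 18-connection profile is slower against the constrained host but faster against the well-provisioned one, mirroring the "too much of a good thing" result above.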
Now imagine your host server handling peak traffic with triple the number of concurrent connections. What used to be 100,000 hits is now 300,000. Is your server ready for this? And what about those nifty DNS tweaks you did to increase parallel connections under the old browsers? Those short-term workarounds will have to be reconciled with the new browser connection schemes.
Changes at the Presentation Layer: Developers certainly remember the transition to IE7, when certain CSS hacks ceased to work. Microsoft was very proactive about preparing the developer community and issued dire warnings about the consequences of failing to test your site in the new browser. The transition to Internet Explorer 8, with its emphasis on Web standards, carries similar risks and opportunities.
The meta-trend of network processing moving to the edges of the cloud empowers developers to build increasingly rich applications. It also means a poorly written application can have a significantly negative impact on the end-user’s machine through CPU and memory utilization. To make matters worse, tools like Chrome’s Task Manager make it easy for users to see exactly which application is putting the most strain on the browser. Your users may not be quick to forgive if your application slows their systems.
Characteristics of a Quality End User Web Experience
Given these major changes, ensuring that your website looks good and performs well in a multiple-browser environment begins with some form of measurement and testing. The overall goal is to ensure a quality, consistent Web experience for the end-user.
Consider viewing the process of cross-browser testing as falling into three main categories:
The Visual — Does Your Website Look “Right?” Perhaps the easiest measure is to simply take a look. Do graphics look the same across all browsers? How is the text rendering? Are Web apps showing up in the right places? Are there any missing function buttons? Visual verification is a straightforward process, but with five or six major browser vendors and multiple versions of each, a tedious one.
The Functional — Do Website Functions Work Correctly? Do your critical business functions work successfully across all browsers and OS combinations? Or will you leave someone with a full shopping cart waiting or unable to check out?
Performance — Overall, How Does Your Site Perform? Three benchmarks assess end-user Web application performance. Each measures a specific aspect of the customer experience:
• Availability: shows whether a Web page or full end-to-end transaction requested by a user executes successfully, without error.
• Response Time: shows the speed at which every end-to-end transaction, page, image or third-party content downloads.
• Consistency: shows the site’s ability to achieve a quality customer experience across multiple visits, regardless of the user’s geographic location.
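The three benchmarks above can be computed from raw monitoring samples. The sketch below uses hypothetical measurement records, and it measures consistency as the coefficient of variation of response times (one reasonable choice among several; the article does not prescribe a formula):

```python
from statistics import mean, pstdev

# Hypothetical measurement records: (succeeded, response_seconds).
# A failed attempt has no response time.
samples = [
    (True, 1.8), (True, 2.1), (True, 1.9), (False, None),
    (True, 2.0), (True, 5.4), (True, 1.7), (True, 2.2),
]

def availability(samples):
    """Fraction of attempts that completed without error."""
    return sum(1 for ok, _ in samples if ok) / len(samples)

def response_time(samples):
    """Mean response time over successful attempts."""
    return mean(t for ok, t in samples if ok)

def consistency(samples):
    """Coefficient of variation of successful response times:
    lower means a steadier experience from visit to visit."""
    times = [t for ok, t in samples if ok]
    return pstdev(times) / mean(times)
```

With these sample records, availability is 87.5 percent, and the single 5.4-second outlier drags both the average response time and the consistency score down, exactly the kind of intermittent slowness the next paragraph warns about.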
As we’ve seen, consistency scores can be especially impacted by connection parallelism. More connections might mean better page load times when the site is under a light load, but much worse performance when under a heavy load. Some end-users will hesitate to return to a site if they experience inconsistent load times.
Perceived Performance: Ultimately, you are optimizing performance for the user’s actual experience. Unlike response time, which typically measures how long it takes for a page and its components to fully load, perceived performance measures how long it takes for the page to appear to load: how long until the page stops visibly changing, and how long until all visible, above-the-fold components have loaded.
The perceived performance metric essentially captures the time it takes for your application to be available for user interaction. This is usually affected by the size of the end-user’s browser window and by how the page is designed. Does the layout cause below-the-fold components to load before the visible ones, or after? In either case, the total load time of the page will be essentially the same, but the perceived performance will be very different. As you can imagine, a fast perceived load time is important to the end-user.
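The distinction between full and perceived load time can be made concrete with a small sketch. The component names and timings below are hypothetical; the point is that the two metrics diverge whenever below-the-fold content finishes last:

```python
# Hypothetical timing data: each page component with the time (seconds
# after navigation start) at which it finished loading, and whether it
# renders above the fold in a typical browser window.
components = [
    {"name": "header.css",   "done": 0.4, "above_fold": True},
    {"name": "hero.jpg",     "done": 1.1, "above_fold": True},
    {"name": "footer.js",    "done": 2.9, "above_fold": False},
    {"name": "ad_widget.js", "done": 3.5, "above_fold": False},
]

def full_load_time(components):
    """Traditional response time: when the last component finished."""
    return max(c["done"] for c in components)

def perceived_load_time(components):
    """When everything visible above the fold had finished loading."""
    return max(c["done"] for c in components if c["above_fold"])
```

Here the page "fully" loads in 3.5 seconds, but the user perceives it as ready after 1.1 seconds, because only below-the-fold components are still arriving. Reordering the layout so visible components load first is what improves this metric.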
In the future, when the entire Internet is better optimized for these new browsers, we’ll all have a better Web experience. Pages will load faster, sites will be better-looking and apps will be easier to develop across all platforms. The future is bright. But until the browser wars are settled, getting there will hold many challenges. Are you ready?
Identifying and Measuring: Where to Start
Using tools and approaches based on the parameters outlined in this article, here’s how to get started:
Know Your Customers’ Browser Preferences: As you’re being forced to test across multiple browsers, it is important to know which browsers are most utilized by your customers. That way you can prioritize resources by first optimizing your site for those specific browsers.
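Prioritizing by browser share is a simple tally over your traffic logs. The visit list below is hypothetical; in practice you would derive these identifiers from the User-Agent strings in your server logs or analytics tool:

```python
from collections import Counter

# Hypothetical per-visit browser identifiers pulled from access logs.
visits = ["IE6", "IE7", "Firefox 3", "IE6", "Safari 3",
          "Firefox 3", "IE6", "IE7", "Chrome", "IE6"]

def browser_priorities(visits):
    """Return (browser, traffic_share) pairs ordered by share, so
    testing and optimization effort can be allocated accordingly."""
    counts = Counter(visits)
    total = len(visits)
    return [(browser, n / total)
            for browser, n in counts.most_common()]
```

In this sample, IE6 leads with 40 percent of visits, so it would be tested and optimized first, even though newer browsers are more exciting to develop for.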
Test Beyond Your Firewall: The optimal mindset for success in Web application performance testing is understanding that a Web application isn’t what the developer builds; it is what the end-user sees. So moving your testing beyond your firewall and into the end-user’s browser is the key. You must know exactly what your end-users are experiencing across multiple browsers.
Take the Lifecycle Approach to Performance Testing: Browser wars impact everyone in the Web app lifecycle, from designers and developers to the QA team, IT operations, and the marketing and creative departments. Ongoing testing in all areas will ensure an optimal Web experience for your customers.
Benchmark Against the Very Best: It is also important to benchmark your site not only against your direct competitors, but against the best websites, period. That’s how your customers will judge. If Amazon and Blockbuster are their standards for website speed and consistency, you have to make them your points of reference too.
Imad Mouline is the Chief Technology Officer of Gomez, Inc. Gomez is the leading provider of on-demand Web application experience management solutions. You can reach Imad at firstname.lastname@example.org.