THE 3-MINUTE PERFORMANCE CHECKUP - PART 2


Understanding the difference between Google Page Speed data, Real User Monitoring (RUM) data, and Synthetic data for analyzing your webpage performance.

In part 2 of this series, we will explore how technical and business teams measure web performance, where their approaches overlap and diverge, and the best strategies for aligning both teams around web performance metrics.

Talking about web performance is a funny thing - everyone agrees that it is important, but almost everyone has a different way of talking about it, measuring it, and interpreting it. The goal of this post is not to point fingers or declare a victor, but rather to show that even within the same web team, conversations about web performance can often be an apples-to-oranges comparison. Because web performance is an important contributor to a business's success, it is critical to understand how two teams might come to different conclusions about the performance of the same website. What we want to avoid is teams disagreeing on how their website is performing, or on how successful a project was, because they disagree about the underlying performance data.

There are really two distinct groups in an organization when it comes to measuring and consuming web performance data - loosely speaking, they are the “Marketers” and the “Engineers” or the business and the technical teams.

Engineers = IT/ Engineering/ DevOps: these teams are typically looking at web performance through application performance monitoring (APM) tools like Catchpoint, New Relic, Pingdom, Blue Triangle, etc.

Marketers = Marketing/ Product/ Digital Operations: these teams are often looking at web analytics tools like Google Analytics, Adobe Analytics, IBM Digital Analytics, Amplitude, etc.

The main difference between these two approaches is how the data is collected and where the analysis starts - bottom up vs. top down.

Engineering starts from the bottom, using very specific metrics like Time to First Byte, onLoad, and domInteractive. Essentially, they start with all of the very specific things the browser is being asked to do and follow each metric as the page builds out.
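To make that concrete, here is a minimal sketch (an illustration, not any particular tool's implementation) of reading those granular timings in the browser via the Navigation Timing API:

```typescript
// Illustrative sketch: reading the fine-grained timings engineering teams
// typically start from. All values are milliseconds from navigation start.
const [nav] = performance.getEntriesByType(
  "navigation"
) as PerformanceNavigationTiming[];

if (nav) {
  const timeToFirstByte = nav.responseStart; // first byte of the HTML response
  const domInteractive = nav.domInteractive; // DOM parsed, page not yet fully loaded
  const onLoad = nav.loadEventEnd;           // all resources (images, scripts) finished

  console.log({ timeToFirstByte, domInteractive, onLoad });
}
```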

On the other end of the spectrum, Marketing starts from the top, with metrics like conversion rate, page views, and, eventually, page load time. They start with the end goal of what the user is trying to do and then work backward to see what influences that.

As an example, let’s compare how we would answer ‘how fast is a page loading?’ in Google Analytics and in Catchpoint. Before we do that, though, it is important to know how each tool collects performance data.

In Google Analytics, performance data is collected from users visiting your site in real time via the HTML5 Navigation Timing interface, which is supported in most common browsers. This is typically referred to as “Real User Monitoring” data, or RUM. However, Google sets the default sample rate at 1%, so out of the box Google Analytics will only collect performance data from 1% of your users (you can change this percentage). Google also combines all of the granular browser navigation timings into a single metric called Page Load Time and then reports the average of that metric for each page.
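If you want timing data from more of your traffic, the sample rate can be raised in the tracker configuration. A sketch, assuming the classic analytics.js (Universal Analytics) tag; the property ID is a placeholder, and newer Google Analytics versions configure this differently:

```typescript
// Sketch assuming the classic analytics.js (Universal Analytics) tag.
// 'UA-XXXXX-Y' is a placeholder property ID. siteSpeedSampleRate raises the
// share of pageviews that report timing data from the default 1% to 10%.
declare const ga: (command: string, ...fields: unknown[]) => void;

ga("create", "UA-XXXXX-Y", { siteSpeedSampleRate: 10 });
ga("send", "pageview");
```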

Catchpoint, like most dedicated performance monitoring tools, collects data in two ways: synthetic testing and Real User Monitoring. Synthetic testing simulates real traffic by scripting what happens in a browser and measuring the results. It can simulate different network conditions, device types, locations, and other characteristics, and it reports back on all of the navigation timings. Synthetic testing does not need any real human traffic on your page in order to work. Real User Monitoring is much closer to the way Google Analytics collects data: it captures navigation timings and HAR data from each page load as it happens in real time, for the users who are being sampled, but the data is retained in a more fine-grained form. The sample rate is highly dependent on the tool and on any customization done by the end user.
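To illustrate the RUM side, here is a rough sketch of how an agent might sample page loads and send the raw timings back to a collector. The endpoint and the 10% sample rate are hypothetical, not Catchpoint’s actual implementation:

```typescript
// Hypothetical RUM agent: sample a fraction of page loads and beacon the full,
// fine-grained navigation timing entry back to a collector endpoint.
const SAMPLE_RATE = 0.1; // collect from roughly 10% of page loads (illustrative)

window.addEventListener("load", () => {
  if (Math.random() >= SAMPLE_RATE) return; // this visitor is not sampled

  const [nav] = performance.getEntriesByType(
    "navigation"
  ) as PerformanceNavigationTiming[];
  if (!nav) return;

  // Send the whole entry rather than a single pre-averaged number.
  navigator.sendBeacon("/rum-collect", JSON.stringify(nav.toJSON()));
});
```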

In Google Analytics, the metric we will look at is ‘Average Page Load Time’.

This is the average time, in seconds, it takes the page to load. It begins when the navigation begins (i.e., clicking on a link) and ends when the webpage has finished loading in the user’s browser.
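In other words, each sampled pageview yields a single number (navigation start to fully loaded), and the report shows the mean of those numbers per URL. A sketch of the arithmetic, with made-up sample values:

```typescript
// One sampled pageview: navigation start to fully loaded, in seconds.
function pageLoadSeconds(nav: PerformanceNavigationTiming): number {
  return nav.loadEventEnd / 1000; // loadEventEnd is relative to navigation start
}

// The report then shows the mean of that value across all sampled pageviews.
function averagePageLoadTime(samples: number[]): number {
  return samples.reduce((sum, s) => sum + s, 0) / samples.length;
}

// e.g. three sampled loads of the same URL (made-up numbers):
console.log(averagePageLoadTime([2.1, 3.4, 1.9])); // ≈ 2.47 seconds
```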

In summary: for a single URL, Google Analytics rolls the navigation timings from the sampled 1% of your users into a single page load time and reports the average as ‘Average Page Load Time’.

In Catchpoint we have many more possible answers. Catchpoint and other dedicated performance monitoring tools report on all of the navigation timings from both synthetic testing and real user monitoring. Unlike Google Analytics, which steers you toward a single ‘Average Page Load Time’, Catchpoint and other performance monitoring tools expose all of the raw data and let technical teams sift through it to find whatever page speed or navigation timing data point they are looking for. As a result, there is no single, easy, standard answer to ‘how fast is a webpage?’ the way Google Analytics provides one.

From Catchpoint:

“Various attempts have been made for measuring user perception of performance, some of which include:

First paint is reported by the browser and tells you when the page starts changing. It doesn’t indicate completeness, though, and sometimes measures when nothing visible is painted.

Render start is a synthetic test measurement that detects when the page first changes from blank to displaying visible content.

DOM interactive indicates when the browser finishes building the DOM and can be used to approximate when the user is able to interact.

Speed index measures the average time for pixels on the visible screen to reach a “complete” state. It can approximate user perception, but since it doesn’t account for which content is important to a user, the average can be skewed by incomplete content that isn’t relevant to the user experience. It also currently does not work well for indicating perceived performance of soft-navigations in single page applications and can’t easily be explained in laymen’s terms, making it difficult to understand for many people."
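Some of these, like first paint and DOM interactive, can be read directly from the browser; render start and speed index come from synthetic tooling and have no direct browser API. A sketch, assuming a browser that supports the Paint Timing and Navigation Timing APIs:

```typescript
// Sketch: reading two of the user-perception metrics quoted above directly
// from the browser's performance timeline.
const firstPaint = performance
  .getEntriesByType("paint")
  .find((entry) => entry.name === "first-paint");

const [navEntry] = performance.getEntriesByType(
  "navigation"
) as PerformanceNavigationTiming[];

console.log("First paint (ms):", firstPaint?.startTime);
console.log("DOM interactive (ms):", navEntry?.domInteractive);
```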

So we have one tool reporting on the average of all of the navigation timings across 1% of your users, and another tool letting you choose from any of the individual navigation timings across synthetic tests or real users. Which one is correct? Let us be clear: both are - there is no right answer here. There is value in both marketing and engineering analytics tools, and we are not claiming one is “better” than the other. The key point is that the questions you can and should ask of these tools are different, and each is suited to answering a different set of questions. Marketing analytics tools are not great at telling you whether your images or CSS files are too large and slowing down web performance, and engineering analytics tools are not great at finding the correlation between performance and conversion rates.

So what does this mean for organizations and teams working on projects that impact web performance? The simplest answer is just to agree on a measurement framework and metric ahead of time, and have that communicated consistently across both the technical and business teams. The goal is to avoid both teams coming together at the end of the project and disagreeing about what the data shows. Business teams should understand how the technical teams measure and monitor web performance, and that it doesn’t necessarily align directly with what they see in their analytics tools. Technical teams should likewise understand that business teams are looking for links between web performance and business metrics, like conversions or revenue, but that trying to directly connect the dots between Time to First Byte and Revenue is a very arduous task.

If you have followed the steps in part 1 of this series and are ready to kick off a performance project, start by reaching out to the other side (marketing, go talk to engineering, and vice versa) and ask them how they think about web performance. The results might surprise you. From there, agree on a specific metric you are trying to improve, how you will measure that metric, and what tool will report on it, and then communicate that across your entire organization. Taking these steps up front makes it far more likely that your project will go smoothly and succeed, because everyone will have aligned in advance on goals and on how success will be measured.