What should you measure to improve performance?

This content originally appeared on web.dev and was authored by Martin Schierle

The different steps of a purchase funnel are prone to performance issues in different ways, and therefore need different measurement and optimizations:

A conversion funnel going from discover to engage to convert to re-engage.

In this guide we'll address how the different steps can be measured. For this, we recommend looking at both lab and field data.

Lab data is gathered by running tests locally, for example by using Lighthouse and other tools. This can make it possible to compare website performance over time and with competitors through a controlled, stable environment, but it might not be representative of the performance users experience in real life.

Field data is gathered via analytics from real users, and is therefore representative of their experience. However, it can't easily be compared over time or with competitors. Network connections and smartphone hardware evolve over time, and different target audiences might have different devices, thereby making comparisons with field data tough. See also Measure Performance In The Field.

To get a complete picture, both data sources are needed. The following sections show how to gather lab and field data for the different relevant performance marks across the funnel.

Discovery

Optimizing for discovery means optimizing for the first load, which is what new users get, but also what search and social crawlers see. Lab data for a first load can easily be acquired through Lighthouse, while field data (at least for Chrome) is readily available through the Chrome UX Report. A convenient combination of both can be found in PageSpeed Insights. You should also track the relevant metrics in the field yourself, as measuring them on real users' devices provides a good overview.

From a user perspective the most important metrics are:

  • First Contentful Paint (FCP): The time the user stares at a blank screen. This is when most users bounce, as they don't see progress.
  • First Meaningful Paint (FMP): When the user begins to see the main content they came for. This is often the hero image, but for a landing page it may even be a call to action such as a Buy button, since the user may have arrived with a clear intent (for example, through a targeted ad campaign).
  • First Input Delay (FID): The time the website needs to react to the user's first input. Excessive JavaScript and other asset loading problems can block this, leading to failed taps or clicks, erroneous inputs and page abandonment.

There are more metrics you can look at, but these are a good baseline. Also make sure to capture bounce rates, conversions, and user engagement, so that you can relate the performance numbers to them.
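These metrics can be collected on real users' devices with a PerformanceObserver. The sketch below is one possible setup, not a complete solution: the /metrics endpoint, the reportToAnalytics helper, and the threshold values are example names and budgets, to be replaced with your own analytics call and targets.

```javascript
// Classify a metric value against example budgets (milliseconds).
function rateMetric(valueMs, goodMs, poorMs) {
  if (valueMs <= goodMs) return 'good';
  return valueMs <= poorMs ? 'needs-improvement' : 'poor';
}

// Placeholder transport; swap in your analytics package of choice.
function reportToAnalytics(name, valueMs, rating) {
  navigator.sendBeacon('/metrics', JSON.stringify({ name, valueMs, rating }));
}

if (typeof window !== 'undefined' && 'PerformanceObserver' in window) {
  // First Contentful Paint
  new PerformanceObserver((list) => {
    for (const entry of list.getEntriesByName('first-contentful-paint')) {
      reportToAnalytics('FCP', entry.startTime, rateMetric(entry.startTime, 1800, 3000));
    }
  }).observe({ type: 'paint', buffered: true });

  // First Input Delay: delay between the input and its handler running.
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      const delay = entry.processingStart - entry.startTime;
      reportToAnalytics('FID', delay, rateMetric(delay, 100, 300));
    }
  }).observe({ type: 'first-input', buffered: true });
}
```

The buffered flag makes sure entries that fired before the observer was registered are still delivered, so the script can be loaded late without losing the paint entries.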

Engagement And Conversion

After the first load of a landing page, a user will proceed through your site, hopefully towards a successful conversion.

In this phase it is important to have fast and responsive navigations and interactions. Unfortunately it is not trivial to measure the complete flow of navigation and interaction events in the field, as every user takes different paths through the page. We therefore recommend measuring the time needed towards conversion or conversion subgoals ("Time-to-Action") by scripting and measuring the flow in a lab test, to compare performance over time or with competitors.
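Even though full flows are hard to capture in the field, known funnel steps can still be instrumented yourself with the User Timing API. A minimal sketch, where the step names ('flow-start', 'add-to-cart') are hypothetical examples for your own subgoals:

```javascript
// Call when the landing page becomes interactive.
function markFlowStart() {
  performance.mark('flow-start');
}

// Call when the conversion subgoal is reached, e.g. in the buy button's
// click handler; returns the elapsed "Time-to-Action" in milliseconds.
function measureTimeToAction() {
  performance.mark('add-to-cart');
  performance.measure('time-to-action', 'flow-start', 'add-to-cart');
  const entries = performance.getEntriesByName('time-to-action');
  return entries[entries.length - 1].duration;
}
```

The resulting measure can then be reported to your analytics like any other custom metric.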

There are two convenient tools for scripting such a lab test:

WebPageTest

WebPageTest offers a very flexible scripting solution. The basic idea is to:

  • Tell WebPageTest to navigate through the pages of the flow with the navigate command.
  • If needed, script button clicks via clickAndWait commands and fill text fields via setValue. For testing single-page applications, use clickAndWait rather than navigate commands for all steps after the first, as navigate will do a full load instead of the lightweight virtual page load.
  • Make sure to combine the different steps of the flow in the analysis via combineSteps to produce a single overall result report for the complete flow.

Such a script could look like this:

combineSteps
navigate https://www.store.google.com/landingpage
navigate https://www.store.google.com/productpage
clickAndWait innerText=Buy Now
navigate https://www.store.google.com/basket
navigate https://www.store.google.com/checkout

With a script like this in place you can easily measure and compare performance over time. This can even be automated through the WebPageTest API.
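For automation, the same script can be submitted through the WebPageTest runtest.php endpoint via its script parameter. A sketch that builds such a request (the API key is a placeholder; you need to request your own):

```javascript
// Placeholder; request a real key from webpagetest.org.
const WPT_API_KEY = 'YOUR_API_KEY';

const script = [
  'combineSteps',
  'navigate https://www.store.google.com/landingpage',
  'navigate https://www.store.google.com/productpage',
  'clickAndWait innerText=Buy Now',
  'navigate https://www.store.google.com/basket',
  'navigate https://www.store.google.com/checkout',
].join('\n');

// Build the runtest.php request; f=json asks for a JSON response.
function buildRunTestUrl(apiKey, wptScript) {
  const params = new URLSearchParams({
    k: apiKey,
    script: wptScript,
    f: 'json',
  });
  return 'https://www.webpagetest.org/runtest.php?' + params.toString();
}

// Submitting the test is then e.g.:
//   fetch(buildRunTestUrl(WPT_API_KEY, script));
```

The JSON response contains URLs to poll for the finished result, so a nightly job can collect the combined flow metrics over time.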

Puppeteer

Another great option for scripted testing is headless Chrome, which can be controlled through the Node API Puppeteer. The general idea is to start the browser through Puppeteer, navigate to the landing page through the goto function, inject JavaScript to fill fields or click buttons, and proceed through the funnel via further goto calls as needed.

As a metric, the duration of the flow can be measured directly, but you could also sum the FCP, FMP, or TTI values of the individual loads in the flow. Test website performance with Puppeteer provides an overview of how to get performance metrics via Puppeteer. A very simplified example Node script could look like this:

const puppeteer = require('puppeteer');
const { performance } = require('perf_hooks');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  const start = performance.now();
  await page.goto('https://www.store.google.com/landingpage');
  await page.goto('https://www.store.google.com/productpage');
  // Click the buy button, which triggers the basket overlay.
  await page.click('#buy_btn');
  // Wait until the basket overlay is shown.
  await page.waitForSelector('#close_btn');
  await page.goto('https://www.store.google.com/basket');
  await page.goto('https://www.store.google.com/checkout');
  console.log(`Flow took ${((performance.now() - start) / 1000).toFixed(1)} seconds`);
  await browser.close();
})();

This script can easily be automated, made part of the build process or a performance budget, and monitored regularly.
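One way to fold this into a build is to compare the measured flow duration against a budget and fail the run when it is exceeded. A minimal sketch, where the 5-second budget is an arbitrary example value to be tuned against your own baseline:

```javascript
// Example budget; tune to your own measured baseline.
const FLOW_BUDGET_MS = 5000;

// Returns whether the measured flow stayed within budget, and by how
// much it overshot if not.
function checkBudget(measuredMs, budgetMs = FLOW_BUDGET_MS) {
  return {
    pass: measuredMs <= budgetMs,
    overshootMs: Math.max(0, measuredMs - budgetMs),
  };
}

// In the Puppeteer script above you would call, after measuring:
//   const result = checkBudget(performance.now() - start);
//   if (!result.pass) process.exit(1); // fail the CI run
```

Exiting with a non-zero code is what makes the check visible to CI systems, which treat it as a failed build step.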

Re-Engagement

Users will return to your site at different intervals. The longer the gap, the fewer of the website's resources the browser will still have cached, requiring more network requests. This makes it difficult to estimate performance differences across repeat visits in lab tests. It is still recommended to keep an eye on this, though, and a great lab test tool for repeat visits is WebPageTest, which has a dedicated option for a direct repeat view:

WebPageTest offers options to test first view and repeat view as well.

To get a better feeling for repeat visit performance in the field use your analytics package of choice to segment your performance metrics by user type. Here is an example of such a report in Google Analytics:

A Google Analytics custom report can be used to report speed metrics for new and returning users.

A report like this will give you page load times for new and returning users as well.
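If your analytics setup does not segment by user type out of the box, you can attach a visitor-type label yourself when reporting metrics. In this sketch, localStorage serves as a simple first-visit marker; the storage key and the /metrics endpoint are example names:

```javascript
// Returns 'new' on the first visit and 'returning' afterwards, using the
// given storage (e.g. window.localStorage) as the first-visit marker.
function visitorType(storage) {
  if (storage.getItem('hasVisited')) return 'returning';
  storage.setItem('hasVisited', '1');
  return 'new';
}

if (typeof window !== 'undefined') {
  const type = visitorType(window.localStorage);
  const nav = performance.getEntriesByType('navigation')[0];
  // Report the page load time together with the visitor-type segment.
  navigator.sendBeacon('/metrics', JSON.stringify({
    visitorType: type,
    loadTime: nav ? nav.duration : null,
  }));
}
```

Note that localStorage is origin-scoped and cleared along with site data, so this undercounts returning users whose storage was wiped; it is a rough segment, not an exact one.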

Recap

This guide showed you how to measure first load, flow and repeat load via field and lab tests. Make sure to optimize the different steps of the funnel accordingly to maximize discovery (first load), engagement (navigations and flow) and re-engagement (repeat load).

