This performance guide introduces the user-centered performance model and then looks at how some performance metrics are actually measured. The original "User-centric performance metrics" article used to live under Web Fundamentals; it has since moved to an external link, though the Chinese version is still under Web Fundamentals. What follows is a translation with some of my own interpretation.
Last updated: Nov 8, 2019.
- My GitHub
We’ve all heard about how important performance is. But when we talk about performance and making the site “fast,” what exactly are we talking about?
In fact, performance is relative:
- A site may be fast for one user (on a fast network with powerful devices), but it may be slow for another user (on a slow network with low-end devices).
- Two sites may take exactly the same time to finish loading, yet one may seem to load faster (if it loads content progressively rather than waiting until the very end to display anything).
- A site may appear to load quickly, but then respond slowly (or not at all) to user interaction.
Therefore, when we talk about performance, it is important to measure it precisely, with quantifiable, objective criteria. These criteria are called metrics.
However, just because a metric is based on objective criteria and can be measured quantitatively does not necessarily mean those measurements are useful.
Historically, web performance was measured with the load event. However, although load is a well-defined moment in the page lifecycle, that moment does not necessarily correspond to anything the user cares about.
For example, a server can respond with a minimal page that triggers the load event almost immediately, but then asynchronously fetches and displays all of the page's content. That content may appear several seconds after the load event fires. Although such a page technically has a very fast load time, that time does not correspond to how the user actually experiences the page loading.
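To make this concrete, here is a browser-only sketch (the helper name `logLoadVsFirstPaint` is my own) that logs both when the `load` event ends and when the first content actually paints; on a page like the one described, the two can differ by seconds:

```javascript
// Sketch: compare the load event's timing with the first contentful paint.
// Guarded so the file also parses and runs outside a browser.
function logLoadVsFirstPaint() {
  if (typeof PerformanceObserver === 'undefined' || typeof window === 'undefined') {
    return 'not running in a browser';
  }
  // buffered: true lets us see the paint entry even if it happened
  // before this observer was registered.
  new PerformanceObserver((list) => {
    for (const entry of list.getEntriesByName('first-contentful-paint')) {
      console.log('first-contentful-paint at', entry.startTime, 'ms');
    }
  }).observe({ type: 'paint', buffered: true });

  window.addEventListener('load', () => {
    const [nav] = performance.getEntriesByType('navigation');
    console.log('load event ended at', nav.loadEventEnd, 'ms');
  });
  return 'observing';
}

console.log(logLoadVsFirstPaint());
```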
Over the past few years, members of the Chrome team, in collaboration with the W3C Web Performance Working Group, have been working to standardize a new set of APIs and metrics that more accurately measure how users experience the performance of web pages.
To help ensure the metrics are relevant to users, we frame them around a few key questions:
| Question | What it asks |
| --- | --- |
| Is it happening? | Did the navigation start successfully? Has the server responded? |
| Is it useful? | Has enough content rendered for users to engage with it? |
| Is it usable? | Can users interact with the page, or is it still busy? |
| Is it delightful? | Are interactions smooth and natural, free of lag and jank? |
How metrics are measured
Performance metrics are generally measured in one of two ways:
- In the lab: using tools to simulate a page load in a consistent, controlled environment
- In the field: on real users actually loading and interacting with the page
Neither option is necessarily better or worse than the other; in fact, you generally want to use both to ensure good performance.
In the lab
When developing new features, it is important to test their performance in the lab. Since it is impossible to measure a new feature's performance characteristics on real users before it ships, testing it in the lab before release is the best way to prevent performance regressions.
In the field
On the other hand, while lab-based performance testing is a reasonable proxy, it doesn't necessarily reflect how all users experience your site in the wild.
Site performance can vary dramatically depending on a user's device capabilities and network conditions. Whether (and how) users interact with the page also has an impact.
In addition, page loads may not be deterministic. For example, on sites that load personalized content or ads, different users may experience vastly different performance characteristics. A lab test will not capture those differences.
The only way to truly understand how your site performs for your users is to measure its performance as those users load and interact with it. This type of measurement is commonly referred to as real user monitoring (RUM).
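As an illustration, a minimal RUM setup might queue metric values and flush them with `navigator.sendBeacon` when the page is hidden, so the request survives the page being unloaded. The endpoint `/analytics` and all helper names below are assumptions for the sketch, not part of any standard:

```javascript
// Pure helper: turn collected metric samples into a JSON payload string.
function toPayload(metrics, page) {
  return JSON.stringify({ page, metrics, ts: Date.now() });
}

// Hypothetical reporter: queue metric values, then flush them in one beacon.
function createReporter(endpoint) {
  const samples = [];
  return {
    record(name, value) {
      samples.push({ name, value });
    },
    flush(page) {
      const body = toPayload(samples, page);
      // sendBeacon is browser-only; guarded so this also runs elsewhere.
      if (typeof navigator !== 'undefined' && navigator.sendBeacon) {
        navigator.sendBeacon(endpoint, body);
      }
      return body; // returned for inspection
    },
  };
}

// Usage sketch. In a browser you would call reporter.flush() from a
// visibilitychange listener when document.visibilityState === 'hidden'.
const reporter = createReporter('/analytics'); // hypothetical endpoint
reporter.record('FCP', 1200);
reporter.record('LCP', 2400);
console.log(reporter.flush('/home'));
```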
Types of metrics
There are several different types of metrics that relate to how users perceive performance:
- Perceived load speed: the speed at which a page loads and presents all its visual elements to the screen.
- Runtime responsiveness: the speed at which the page responds to user interaction after the page is loaded.
- Visual stability: do elements on the page move in ways that users don’t expect and may interfere with their interaction?
- Smoothness: do transitions and animations render at a consistent frame rate and flow smoothly from one state to the next?
Given these categories, it is clear that no single metric can capture all of a page's performance characteristics.
Important metrics to measure
- First Contentful Paint (FCP): measures the time from when the page starts loading to when any part of the page's content is rendered on screen. (lab, field)
- Largest Contentful Paint (LCP): measures the time from when the page starts loading to when the largest text block or image element is rendered on screen. (lab, field)
- First Input Delay (FID): measures the time from when a user first interacts with your site to when the browser is actually able to respond to that interaction. (field)
- Time to Interactive (TTI): measures the time from when the page starts loading to when it is visually rendered, its initial scripts (if any) have loaded, and it can reliably and quickly respond to user input. (lab)
- Total Blocking Time (TBT): measures the total amount of time between FCP and TTI during which the main thread was blocked long enough to affect input responsiveness. (lab)
- Cumulative Layout Shift (CLS): measures the cumulative score of all unexpected layout shifts that occur between when the page starts loading and when its lifecycle state changes to hidden. (lab, field)
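As one concrete example of how such a metric is computed, CLS sums layout-shift entries that were not caused by recent user input (those are expected, so they don't count). The sketch below uses the original "sum of all shifts" definition; note that the current CLS definition groups shifts into session windows and takes the worst window. The helper name is my own:

```javascript
// Pure helper: sum layout-shift values, excluding shifts that followed
// recent user input. (Original "sum everything" CLS definition.)
function cumulativeShift(entries) {
  return entries
    .filter((e) => !e.hadRecentInput)
    .reduce((sum, e) => sum + e.value, 0);
}

// Browser-only wiring, guarded so this also runs outside a browser.
let cls = 0;
if (typeof PerformanceObserver !== 'undefined' && typeof window !== 'undefined') {
  new PerformanceObserver((list) => {
    // Each callback delivers a new batch of entries, so accumulate.
    cls += cumulativeShift(list.getEntries());
  }).observe({ type: 'layout-shift', buffered: true });
}
```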
While this list includes metrics for many different user-relevant aspects of performance, it does not cover everything (for example, runtime responsiveness and smoothness are not currently covered).
In some cases, new metrics will be introduced to cover the gaps, but in other cases the best metrics are ones tailored specifically to your site.
The performance metrics listed above will help you understand the performance characteristics of most sites on the web as a whole. They can also provide a common set of metrics for sites to compare their performance to competitors.
However, sites that are unique in some way sometimes need additional metrics to capture the full performance picture. For example, with Largest Contentful Paint (LCP), it is possible that the largest element is not part of the page's main content, so LCP may not be relevant.
To handle such cases, the Web Performance Working Group has also standardized lower-level APIs that are useful for implementing custom metrics:
- User Timing API
- Long Tasks API
- Element Timing API
- Navigation Timing API
- Resource Timing API
- Server timing
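For instance, a simple custom metric built with the User Timing API marks two points and measures the duration between them. This is a minimal sketch (the mark and measure names are arbitrary); the global `performance` object is available in modern browsers and in Node.js 16+, so it runs in both:

```javascript
// Mark the start of the work you want to time.
performance.mark('task:start');

// ...the work you want to time would go here...
for (let i = 0; i < 1e6; i++); // placeholder work

// Mark the end, then create a named measure between the two marks.
performance.mark('task:end');
performance.measure('task', 'task:start', 'task:end');

// Read the measure back by name; its duration is your custom metric.
const [measure] = performance.getEntriesByName('task');
console.log(`task took ${measure.duration.toFixed(2)} ms`);
```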
You can see a more detailed introduction in custom metrics.
- User-centric performance metrics