A summary of HTTP-related optimizations in front-end performance

Time: 2019-12-10

My study notes and summary articles are also published at https://github.com/xianshanna…; if you are interested, feel free to follow them so we can learn and improve together.

HTTP optimization is an important part of front-end performance optimization and an essential piece of front-end knowledge.

Reduce static resource file size

This is the most fundamental approach. If a static resource file weighs in at more than 10 MB and its size is never reduced, the user experience will still be poor no matter how far everything else is optimized.

Conversely, if the whole page amounts to a 2 KB resource file, it will be fast even without much optimization.

Code level optimization

  • Rely on webpack's tree shaking, which is applied automatically these days, and use as few third-party dependency packages as possible (depending on the situation, of course); a small illustration follows this list.
  • Code splitting: each page loads only its own code rather than the code of other pages (this is really a form of lazy loading).
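A minimal illustration of the tree-shaking point (lodash-es is only an example dependency; any ES-module library behaves the same way). Importing just the functions you use lets the bundler drop the rest from the production build:

// Named import: the bundler keeps debounce and drops the rest of lodash-es.
import { debounce } from "lodash-es";

// By contrast, importing the whole package (e.g. `import _ from "lodash"`)
// cannot be tree-shaken effectively.

window.addEventListener(
  "resize",
  debounce(() => {
    console.log("window resized");
  }, 200)
);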

Transport layer optimization

HTTP supports compressed transmission out of the box, and we should enable it.

Generally, enabling gzip compresses text resources to roughly one sixth of their original size (as a rule, the larger the file, the more repeated strings it contains and the better the compression ratio).

After the server compresses the response, it sets the Content-Encoding response header to the corresponding compression method, and the browser decompresses the body automatically:

Content-Encoding: gzip

Of course, there are other compression methods, such as compress, deflate and so on. Currently, gzip is the most widely used.
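As a rough illustration (a minimal Node.js sketch; the file name bundle.js is hypothetical), the built-in zlib module shows the kind of ratio gzip achieves on a text asset:

const fs = require("fs");
const zlib = require("zlib");

// Read a text asset and gzip it in memory to compare sizes.
const original = fs.readFileSync("bundle.js"); // hypothetical file
const gzipped = zlib.gzipSync(original);

console.log("original:", original.length, "bytes");
console.log("gzipped :", gzipped.length, "bytes");
console.log("ratio   :", (original.length / gzipped.length).toFixed(1) + "x");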

Merge or split requests appropriately

Whether to merge or split requests depends on the actual situation.

Merging requests

Up to and including HTTP 1.1, browsers limit the number of concurrent requests to the same domain. Chrome currently allows six concurrent requests per domain; other browsers allow slightly more or fewer, but the difference is small.

If the web service runs over HTTP 1.1, it is best to keep the number of resources loaded on first load below four or so. If there are too many static resource requests, merge them as the situation requires.

Splitting requests

HTTP 2.0 has no per-domain concurrency problem, so we can split requests more freely. Even under HTTP 1.1, if a resource file is very large and the concurrency limit has not been reached, you can split that file to spread the requests.

Use preload

In some cases preloading can greatly improve loading speed and the user experience.

Preloading requires understanding preload and prefetch.

Preload DNS

DNS resolution also takes time, especially on mobile. We can pre-resolve the DNS of other domains the page will use to reduce the resolution time when those requests are actually made.

    <link rel="dns-prefetch" href="//example.com">

There is actually another hint, preconnect. It not only pre-resolves DNS but also completes the TCP handshake and, for HTTPS, the TLS negotiation. However, browser compatibility is not yet good enough for it to be relied on.

<link rel="preconnect" href="http://example.com">

Preload static resources

Using preload

preload is generally used to preload static resources that the current page will need: images, fonts, JS scripts, CSS files, and so on.

Scenario 1

If necessary, you can do this preloading entirely from script. For example, create an HTMLLinkElement instance and attach it to the DOM:

var preloadLink = document.createElement("link");
preloadLink.href = "myscript.js";
preloadLink.rel = "preload";
preloadLink.as = "script";
document.head.appendChild(preloadLink);

This means that the browser will preload the JavaScript file, but not actually execute it.

Then, when you actually need it, you can execute it:

var preloadedScript = document.createElement("script");
preloadedScript.src = "myscript.js";
document.body.appendChild(preloadedScript);

This is especially useful when you want to preload a script but defer its execution until it is actually needed.

Scenario 2

A font is not loaded until it is actually used (for a custom font, the HTTP request that fetches the font file is only sent at that point).

Because of this behavior, we can preload the font; by the time it is used it has already been downloaded and there is no extra waiting.

In the following example, the code works without preloading, but the font request has to wait until the page's JS and CSS resources have loaded and the font is actually used on the page:

<style>
  @font-face {
    font-family: Test-Number-Medium;
    src: url(./static/font/Test-Number-Medium.otf);
  }
</style>

We add:

<link rel="preload" href="./static/font/Test-Number-Medium.otf">

With this, the font starts downloading ahead of time, saving most or even all of its load time. In practice it is usually all of it, because the page's JS resources are much larger than the font (with parallel downloads, the slowest resource determines the overall load time).

Using prefetch

prefetch is generally used to preload resources for pages other than the current one. It is a low-priority resource hint: it lets the browser fetch, in the background while idle, resources that may be needed in the future and store them in the browser cache. After the current page has finished loading, resources marked with prefetch are downloaded, and when the user navigates to another page, the prefetched resources can be served from the cache immediately.

However, prefetch has relatively few application scenarios.

<link rel="prefetch" href="/uploads/images/pic.png">

Lazy loading

Lazy loading images

The idea is to load an image only when the user scrolls near its position (for a good experience it should, of course, start loading slightly in advance). Almost every image-heavy site (video sites, for example) does this.

Similarly, in an image carousel, the next picture is loaded when the user switches to it instead of loading all of the pictures at once.
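A minimal sketch of image lazy loading with IntersectionObserver (the data-src attribute and the 200px margin are assumptions for illustration):

// Lazily load every <img data-src="..."> once it approaches the viewport.
const lazyImages = document.querySelectorAll("img[data-src]");

const observer = new IntersectionObserver(
  (entries, obs) => {
    entries.forEach((entry) => {
      if (!entry.isIntersecting) return;
      const img = entry.target;
      img.src = img.dataset.src; // start the real image request
      obs.unobserve(img);        // each image only needs to be loaded once
    });
  },
  { rootMargin: "200px" }        // start loading a little before it is visible
);

lazyImages.forEach((img) => observer.observe(img));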

Lazy loading of JS

When a piece of JS is needed, dynamically create a <script> tag to load the JS file lazily; this is how webpack's code splitting works.
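A minimal sketch of this pattern with a dynamic import, which webpack turns into a separate lazily loaded chunk (the module path, element IDs, and renderChart function are hypothetical):

// chart.js is only downloaded (as its own chunk) when the user asks for it.
document.getElementById("show-chart").addEventListener("click", async () => {
  const { renderChart } = await import("./chart.js"); // hypothetical module
  renderChart(document.getElementById("chart-container"));
});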

Use defer and async appropriately

Script resources with the defer or async attribute are downloaded in parallel and do not block page parsing, which saves script download time.

There are two differences:

Scripts with the defer attribute execute in the order they appear on the page, after the document has been parsed and before the DOMContentLoaded event fires.

Scripts with the async attribute execute as soon as they finish downloading, which may be before or after the DOMContentLoaded event. Multiple async scripts have no fixed execution order: whichever finishes loading first runs first.

So why does this save download time? Let's compare.

Loading a defer resource is similar to placing the <script> just before </body>, except that a defer script can be downloaded in parallel with the resources in <head>, which saves part or even all of its download time (how much depends on the relative sizes of the <head> resources and the defer resource). Placing defer resources in <head> appropriately can therefore speed up some scenarios.

An async resource is a bit like a script placed before </body> that, once loaded, dynamically creates a <script> tag to fetch another resource; the difference is that the async resource does not have to wait for the page's JS to execute first, so it starts loading earlier and saves some loading time. It is therefore well suited to auxiliary JS that runs independently and does not affect the page, such as Google Analytics, Baidu Analytics, or log-reporting scripts.
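A minimal illustration of the two attributes (the file names are hypothetical):

<head>
  <!-- Downloads in parallel with the head resources; executes in document order,
       after parsing finishes and before DOMContentLoaded fires. -->
  <script defer src="app.js"></script>

  <!-- Downloads in parallel and runs as soon as it arrives; suitable for
       independent helpers such as analytics or log reporting. -->
  <script async src="analytics.js"></script>
</head>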

Using cache

Caching is a great optimization for repeat visits to the same resources and an unavoidable part of HTTP optimization. For static resource files such as CSS and JS we usually use a strong cache (for example, caching for 30 days); with a strong cache in effect, the browser does not request the static resource from the server again.
However, if a strong cache is used improperly it can cause unexpected problems for users. For example, the entry HTML file must not be strongly cached; otherwise, after a new version is released, users will keep seeing the old page until the cache period expires.
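For illustration, the response headers might look like the following (a sketch only; the 30-day value mirrors the example above and assumes the static file names contain a content hash, so a new release gets new URLs). For such fingerprinted static assets:

Cache-Control: max-age=2592000

For the entry HTML, which should always be revalidated with the server:

Cache-Control: no-cache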

For details, see my other caching-related article on browser HTTP caching.

Use HTTP 2.0

HTTP 2.0's multiplexing removes the need to spread resources across multiple domains and can save overall resource download time; header compression and differential header transmission further improve transfer efficiency.
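For reference, a minimal sketch of serving HTTP 2.0 with Node's built-in http2 module (the certificate paths and port are hypothetical; browsers only speak HTTP/2 over TLS, so a certificate is required):

const http2 = require("http2");
const fs = require("fs");

// Browsers only use HTTP/2 over TLS, so a key and certificate are needed.
const server = http2.createSecureServer({
  key: fs.readFileSync("server-key.pem"),   // hypothetical paths
  cert: fs.readFileSync("server-cert.pem"),
});

// Every request arrives as a stream on the same multiplexed connection.
server.on("stream", (stream, headers) => {
  stream.respond({ ":status": 200, "content-type": "text/plain; charset=utf-8" });
  stream.end("you asked for " + headers[":path"]);
});

server.listen(8443);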

Multiplexing

HTTP 1.1 persistent connections solve the problem of connection reuse, but one problem remains: a TCP connection cannot process requests concurrently. Within one connection only a single request can be sent at a time, and the next request can be sent only after the previous response has completed.

HTTP 2.0 uses multiplexing, and the number of concurrent requests it supports on a single connection is orders of magnitude larger than under HTTP 1.1.

Of course, HTTP 1.1 can also establish several more TCP connections to support processing more concurrent requests, but creating a TCP connection itself also has overhead.

A TCP connection also goes through a warm-up and protection phase (slow start): it first sends a small amount of data to check that transmission succeeds, then gradually increases the sending rate. As a result, the server responds more slowly to a burst of brand-new connections, so it is best to handle bursts of concurrent requests over an already established connection.

What optimizations can multiplexing bring?

With multiplexing, there is no need for the browser's per-domain connection limit to constrain us (under HTTP 1.1, Chrome supports at most six concurrent persistent connections to the same domain).

We can then split resources according to the actual situation to save download time. With no concurrency limit, the total download time is determined by the slowest parallel download; one resource does not have to wait for another to finish before starting. When there are many resources, this greatly improves their overall download speed.

The old CSS sprite optimization technique is no longer necessary once multiplexing is available.

Multiplexing also brings lower latency, which is another aspect of the speed improvement.

Binary framing

HTTP 2.0 adds a binary framing layer between the application layer and the transport layer. This breaks through the performance limitations of HTTP 1.1 and improves transmission performance, achieving low latency and high throughput without changing HTTP semantics: methods, status codes, URIs, and header fields stay the same.

In the binary framing layer, HTTP 2.0 splits all transmitted information into smaller messages and frames and encodes them in binary. The header information of HTTP 1.x is carried in HEADERS frames, and the request body is carried in DATA frames.

Header field compression

HTTP 2.0 uses the HPACK algorithm to compress header field data, so less data is sent and it travels across the network faster.

Differential header transmission

HTTP 2.0 specifies that the client and the server each maintain a header table to track and store previously sent header key-value pairs. Identical headers only need to be sent once rather than with every request.

In fact, if a request contains no changed header fields (for example, a polling request for the same resource), the server automatically reuses the header fields sent with the previous request, and the header overhead is zero bytes.

If the headers change, only the changed data needs to be sent in the HEADERS frame, and the new or modified headers are appended to the header table. The header table exists for the whole lifetime of the HTTP 2.0 connection and is updated incrementally by both the client and the server.

Using CDN

Strictly speaking, a CDN is not an HTTP optimization, and front-end developers cannot set one up directly; it is an operations matter. CDN nodes solve the problem of cross-carrier and cross-region access and improve access speed.

The full name of CDN is Content Delivery Network, that is, a content distribution network built on top of the existing network. Relying on edge servers deployed in many locations and on the central platform's load balancing, content distribution, and scheduling modules, it lets users fetch the content they need from a nearby node, reducing network congestion and improving response speed and hit rate. The key technologies of a CDN are content storage and content distribution. (quoted from an encyclopedia entry)

What are the advantages of CDN?

The biggest advantage of a CDN is that it speeds up users' access to resources, so it is a good optimization for static resources.
With servers distributed geographically, users access a nearby node; CDN nodes solve the problem of cross-carrier and cross-region access while also spreading the load on the origin server.

There is another advantage:
No cookies are transferred (strictly speaking this comes from using a separate domain rather than from the CDN itself). Static resources generally do not need cookies, so serving them from a different domain saves some bandwidth and improves access speed slightly. A single request does not show a noticeable difference, but with many requests the savings add up.

How does a CDN spread the load on the origin server?

A CDN has two core mechanisms: caching and back-to-origin fetching.

Caching and back-to-origin policies together relieve the origin server. Resources fetched from the origin server are cached on CDN nodes as required. When a user then requests a resource, if the CDN node they are routed to has not cached the content, or the cache has expired, the node goes back to the origin server to fetch it.