Performance optimization is a trade-off philosophy

Time: 2021-01-20

When masters spar, the point is often not who knows more, but who knows how to draw the opponent in.

When people first learn performance optimization, many of them grab at everything at once, doing everything they know, want to do, and can do. That is the classic hard worker: lots of effort, little leverage.

A lot of effort goes in, the work lands, and the performance does improve. But code keeps growing; it does not stay frozen where you optimized it. So after a short while the page can be back in an unrecognizable state.

The people who truly understand optimization are the ones who truly understand the philosophy of trade-offs. When we say someone is good at weighing trade-offs, the underlying logic is comparing data within their scope of knowledge: the 80/20 rule, prioritization, the four-quadrant rule, time management, human-resource and cost management, grasping the big and letting go of the small, and even retrospectives.

Assess your environment

First, know what kind of environment you are in: are you optimizing a mobile page or a PC page, a first-screen page or a secondary page, a first-hop scenario or a second-hop scenario?

KPI

People who only know how to work but not how to make money are usually called fools. So from the very start you need to know exactly what will help you earn money.

If you pay attention to the common tickets coming from the customer-service and sales teams, and study the reasons behind high bounce rates and low conversion rates, you will find that some metrics you consider very important are not that important at all (for example, feature pages that have not been used in a quarter).

By now you know what I am talking about is business value. Every cent must be spent where it cuts: on the blade.

Suppose you want to be at least 20% faster than your competitors; then you need to choose your target metrics.

  • TTFB (Time To First Byte):

    • The time to first byte is an important indicator of server response speed
    • The moment the browser starts receiving response data from the server = back-end processing time + redirect time
  • FID (First Input Delay):

    • The time from the user's first interaction with the page to the moment the browser can actually respond to that interaction
    • Can be measured by integrating the web-vitals library (see the sketch after this list)
  • First paint time

    • From the moment the user opens the page until something appears on screen
    • (chrome.loadTimes().firstPaintTime - chrome.loadTimes().startLoadTime) * 1000
    • window.performance.timing.responseEnd - window.performance.timing.fetchStart
  • LCP (Largest Contentful Paint)
  • Resource size
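
As a rough illustration, here is a minimal sketch of collecting several of the metrics above in the browser with the web-vitals library (assuming a v3-style API; older releases expose getFID/getLCP instead of onFID/onLCP). The /perf endpoint is a hypothetical reporting URL, not part of any standard.

```ts
import { onTTFB, onFID, onFCP, onLCP } from 'web-vitals';

// Hypothetical reporting endpoint; swap in your own monitoring SDK.
function report(metric: { name: string; value: number }) {
  navigator.sendBeacon('/perf', JSON.stringify({ name: metric.name, value: metric.value }));
}

onTTFB(report); // Time To First Byte
onFID(report);  // First Input Delay
onFCP(report);  // First Contentful Paint
onLCP(report);  // Largest Contentful Paint
```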

You may think the metrics above are the most important things to watch. But by themselves they are not enough, because they come with no targets attached. Suppose the targets look like this, under a 3G network:

  • FID < 100 ms: the browser can respond to the user's first interaction within 100 ms
  • TTI < 5 s: the layout is stable, key web fonts are visible, and the main thread is idle enough to handle user input
  • LCP < 3 s: the main content of the page has finished rendering in the viewport
  • Critical file size < 170 KB (after gzip)

Can you actually hit those numbers? Think about it: what exactly is wrong with your page right now, what is dragging these metrics down, and do you have a tool to measure it?

Performance monitoring

When you take over a problematic project, you can investigate from the following angles:

  • Are unused interfaces still being called?
  • Has a very large third-party package been bundled instead of being served from a CDN?
  • Are concurrent requests blocking resource loading, for example a page that calls more than ten interfaces?
  • Are there abnormally large business resources because inputs are not controlled?
  • …

These problems come with the territory of large projects. They carry the famous label of "issues left over by history", so they easily become entrenched and nobody dares to touch them.

The browser has a Performance panel with visualization built in. But think about who actually cares about that data; it is also hard to share, and most importantly it cannot monitor and measure automatically, continuously, anytime and anywhere. So there is usually a self-developed SDK that reads window.performance, post-processes the data, and builds deeper visualizations on top of it. Useful visualized data can drive a long-term, performance-focused team culture.
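
For a sense of what the core of such an SDK might look like, here is a minimal sketch that post-processes window.performance navigation timing and beacons it to a back end. The field names and the /perf-report endpoint are illustrative assumptions, not any particular product's API.

```ts
// Collect a few navigation-timing figures after the page has fully loaded.
function collectNavigationTiming() {
  const [nav] = performance.getEntriesByType('navigation') as PerformanceNavigationTiming[];
  if (!nav) return;

  const metrics = {
    ttfb: nav.responseStart - nav.requestStart, // network + back-end processing time
    domReady: nav.domContentLoadedEventEnd - nav.startTime,
    load: nav.loadEventEnd - nav.startTime,
    transferSize: nav.transferSize,             // bytes over the wire for the document
  };

  // sendBeacon survives page unload better than fetch/XHR.
  navigator.sendBeacon('/perf-report', JSON.stringify(metrics));
}

window.addEventListener('load', () => {
  // loadEventEnd is only populated once the load event has finished.
  setTimeout(collectNavigationTiming, 0);
});
```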

Underlying logic of performance optimization

If you think about it, the levers of performance optimization come down to network loading, rendering, file optimization and user experience. DNS, CDN, TCP, caching at every layer, gzip compression, code minification and obfuscation are basically handled at the architecture layer: set them up once and you rarely need to worry about them again. On the user-experience side, a loading indicator, a progress bar, a skeleton screen and so on are likewise standard solutions.

You are always thinking about solutions along three lines:

  • Remove intermediate steps from the full chain: for example, turn serial requests into parallel ones (see the sketch after this list).
  • Preload, pre-execute and lazy-load as much as possible.
  • Make delivery progressive and segmented.
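
As a deliberately simple illustration of the first point, here is a sketch comparing serial and parallel requests; the /api/* endpoints are hypothetical.

```ts
// Serial: each request waits for the previous one. Total ≈ t1 + t2 + t3.
async function loadPageDataSerial() {
  const user = await fetch('/api/user').then((r) => r.json());
  const feed = await fetch('/api/feed').then((r) => r.json());
  const ads = await fetch('/api/ads').then((r) => r.json());
  return { user, feed, ads };
}

// Parallel: independent requests fire together. Total ≈ max(t1, t2, t3).
async function loadPageDataParallel() {
  const [user, feed, ads] = await Promise.all([
    fetch('/api/user').then((r) => r.json()),
    fetch('/api/feed').then((r) => r.json()),
    fetch('/api/ads').then((r) => r.json()),
  ]);
  return { user, feed, ads };
}
```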

Is SSR worth it?

SSR means that when the user first requests a page, the server renders the required components or page into an HTML string and returns it to the client.
Usually, when people consider SSR, they are trying to solve two problems of CSR: SEO and slow first-screen loading. Here we focus on performance. Think about which part of the time the server-rendered approach actually saves: the front-end rendering and the Ajax requests, because all of that computation is moved to the server.

But the returned HTML will be larger than the CSR file, and users who are far away from the server will still face a long white screen.

Offline package or PWA

So the mobile side usually addresses the HTML loading time with offline-package technology. The basic idea of an offline package is that the WebView intercepts URLs uniformly and maps resources to a local offline package; on update, it detects, downloads and maintains the resources in a local cache directory. Examples include Tencent's webso and AlloyKit's offline-package solution. For the web side, an offline package is a relatively transparent, minimally invasive solution.

PWA speeds up loading with a pure web approach: it caches static resources through CacheStorage.
In the traditional HTTP caching scheme we generally do not cache HTML, because the HTML of a CSR page is an empty shell; if we set a large max-age, users will keep seeing the old page until the browser cache expires.
For server-rendered ("straight-out") HTML, a PWA caches the HTML returned by the back end into CacheStorage. On the next request it reads from the local cache first and, at the same time, fires a network request to refresh the local copy.
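
Here is a minimal service-worker sketch of that "serve the cached HTML first, refresh it in the background" idea (a stale-while-revalidate pattern). The cache name and the navigation-only matching are illustrative choices, not the exact implementation of any offline-package or PWA product.

```ts
declare const self: ServiceWorkerGlobalScope;

const HTML_CACHE = 'html-v1';

self.addEventListener('fetch', (event) => {
  const req = event.request;
  if (req.mode !== 'navigate') return; // only handle HTML navigations here

  event.respondWith(
    caches.open(HTML_CACHE).then(async (cache) => {
      const cached = await cache.match(req);
      // Fire a network request that refreshes the cache for next time.
      const networkFetch = fetch(req).then((res) => {
        if (res.ok) cache.put(req, res.clone());
        return res;
      });
      // Serve the cached copy immediately if it exists, otherwise wait for the network.
      return cached || networkFetch;
    })
  );
});
```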

Then you run into a new problem: the very first launch still pays the cost of loading the HTML. In a hybrid app, the native shell can pre-load a JS script that pulls the pages the PWA should cache and warms the cache ahead of time.

For the front end, PWA is arguably the better solution, but it is not a silver bullet: it has compatibility constraints, for example it only works over HTTPS.
It is also worth comparing the offline-package approach and the PWA approach with real data. Many teams ship a PWA but never try removing it to see whether the numbers actually change.

Cost of SSR

At this stage, virtually all the SSR solutions we know are Node-based, which means adopting SSR requires a Node middle layer. That in turn means you need people who can own the operations and architecture of that Node layer, plus the hardware cost of the servers. There are also differences between server and client rendering, where the client behaves normally but the server throws. And the separation of front end and back end can break smooth releases: if the page's static resources (JS, CSS) are not released together with the back end, the HTML the back end returns may not match the front end's JS and CSS; without compatibility handling you can end up with broken styles or selectors that cannot find their elements.

So is adding SSR worth it in the end? Like the pony crossing the river, you have to test the water yourself. In general there are plenty of other ways to optimize first-screen rendering and SEO; do not reach for SSR unless you have no other choice.

NSR

If running SSR on the server is too expensive, can we push the work to the client instead? With the browser's help, a JS runtime renders the downloaded HTML template together with prefetched feed data ahead of time, and the resulting HTML is put into an in-memory cache, so the page appears the moment the user taps.

This scheme spreads the back end's rendering pressure across the clients, and because the client prefetches data and preloads resources, pages can open in near-instant time. But there is a catch: preloading means you have to be a fortune teller about what the user will open next.

ESI (Edge Side Includes)

And if it is a first-hop page, preloading, pre-executing and pre-rendering can all step aside. Besides the server and the client, is there anywhere else we can put resources? Yes: the proxy side, for example a CDN.
A CDN node is closer to the user than the origin server and has lower network latency. On the CDN node, the static, cacheable part of the page is returned to the user immediately; at the same time, the request for the dynamic part is initiated from the CDN node, and the dynamic content is streamed back to the user after the static part's response.

  • First-screen TTFB is very short, and the static content (page header, basic structure, skeleton) can be seen quickly.
  • The dynamic-content request is initiated from the CDN node, earlier than a traditional browser-driven render, and it does not depend on the browser downloading and executing JS; in theory the final response finishes no later than fetching the full dynamic page from the origin directly.
  • Once the static content is back, the browser can start parsing that part of the HTML and downloading and executing JS and CSS; the blocking work happens earlier, so the dynamic content can be displayed faster once it streams in.
  • Compared with the client-to-origin network, the edge-to-origin network has more room for optimization, such as dynamic acceleration and connection reuse between edge and origin, which cut TCP connection setup and transfer overhead for dynamic requests, so the final dynamic content can come back even faster than if the client hit the origin directly.

ESR relies on the edge computing power of the CDN (the CDN must be able to run something like a service worker at the edge and let you program requests and responses flexibly). If your CDN provider does not support this, there is nothing to discuss.
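
To make the flow concrete, here is a Cloudflare-Workers-style sketch (modern module syntax) of the edge rendering idea: flush a static shell to the user immediately, then stream in the dynamic part fetched from the origin. The shell markup and origin URL are illustrative assumptions; real ESR products differ in their APIs.

```ts
const STATIC_SHELL =
  '<!doctype html><html><head><title>demo</title></head><body><div id="skeleton">loading…</div>';

export default {
  async fetch(request: Request): Promise<Response> {
    const { readable, writable } = new TransformStream();
    const writer = writable.getWriter();
    const encoder = new TextEncoder();

    // 1. Flush the static shell right away; this is what keeps first-screen TTFB short.
    writer.write(encoder.encode(STATIC_SHELL));

    // 2. Fetch the dynamic body over the edge-to-origin link, append it after the
    //    shell and close the document. Not awaited, so the shell streams out while
    //    the origin is still responding.
    fetch('https://origin.example.com/dynamic-body', { headers: request.headers })
      .then((res) => res.text())
      .then((body) => writer.write(encoder.encode(body + '</body></html>')))
      .then(() => writer.close());

    return new Response(readable, {
      headers: { 'content-type': 'text/html; charset=utf-8' },
    });
  },
};
```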

Load only what you use

Lazy loading

  • Do not load the whole file at once: split it ahead of time and load only the part needed right now
  • When more content is needed, load the required part on demand

Routing level

  • require.ensure(dependencies, callback, chunkName)
  • Bundle-Loader

Module level

  • import()
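
As a small sketch of module-level splitting with dynamic import(): webpack emits the imported module as a separate chunk that is only fetched when this code actually runs. The ./chart module and renderChart function are hypothetical.

```ts
async function showChart(container: HTMLElement, data: number[]) {
  // The magic comment names the emitted chunk (a webpack convention).
  const { renderChart } = await import(/* webpackChunkName: "chart" */ './chart');
  renderChart(container, data);
}
```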

Content level
Image lazy loading (a sketch follows):
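
A minimal sketch using IntersectionObserver, assuming the real image URL is kept in a data-src attribute until the image approaches the viewport:

```ts
const lazyImages = document.querySelectorAll<HTMLImageElement>('img[data-src]');

const observer = new IntersectionObserver(
  (entries, obs) => {
    for (const entry of entries) {
      if (!entry.isIntersecting) continue;
      const img = entry.target as HTMLImageElement;
      img.src = img.dataset.src!;   // swap in the real URL
      img.removeAttribute('data-src');
      obs.unobserve(img);           // each image only needs to load once
    }
  },
  { rootMargin: '200px' }           // start loading a little before it scrolls in
);

lazyImages.forEach((img) => observer.observe(img));
```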

Progressive

Take your bundle apart and load it bit by bit, putting the least important parts last.
This is done through webpack's code-splitting configuration: splitChunks, externals, DllPlugin and so on (a sketch follows).
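
As an illustration, here is a minimal webpack configuration sketch combining splitChunks and externals; the cache-group layout and CDN externals are illustrative choices, not a one-size-fits-all recommendation.

```ts
import type { Configuration } from 'webpack';

const config: Configuration = {
  externals: {
    // Load React from a CDN <script> tag instead of bundling it.
    react: 'React',
    'react-dom': 'ReactDOM',
  },
  optimization: {
    splitChunks: {
      chunks: 'all',
      cacheGroups: {
        vendors: {
          test: /[\\/]node_modules[\\/]/, // third-party code gets its own chunk
          name: 'vendors',
          priority: -10,
        },
      },
    },
  },
};

export default config;
```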

Delete unused code

  • Module-level dead code: tree shaking
  • Fragmented dead code (console statements, comments, etc.): optimization.minimize and optimization.minimizer
  • Redundant declaration statements: scope hoisting
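
These three bullets map roughly onto webpack settings like the following sketch (usedExports for tree shaking, TerserPlugin for stripping console statements and comments, concatenateModules for scope hoisting). The exact options are illustrative, not mandatory.

```ts
import type { Configuration } from 'webpack';
import TerserPlugin from 'terser-webpack-plugin';

const config: Configuration = {
  mode: 'production',
  optimization: {
    usedExports: true,        // mark unused exports so they can be dropped (tree shaking)
    concatenateModules: true, // scope hoisting: merge modules, drop wrapper declarations
    minimize: true,
    minimizer: [
      new TerserPlugin({
        terserOptions: {
          compress: { drop_console: true }, // strip console statements
          format: { comments: false },      // strip comments
        },
        extractComments: false,
      }),
    ],
  },
};

export default config;
```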

How to write the code

  • Toggle display: none first, then modify the styles
  • Batch style changes by switching a class
  • Use the transform property for animations (a few of these items are sketched after this list)
  • Set meta viewport (it can speed up page rendering)
  • Use the loading attribute on the <img> tag
  • Use a polyfill service
  • Use Canvas where it is cheaper than a large DOM tree
  • Use requestAnimationFrame
  • Use shouldComponentUpdate
  • Clean up timers promptly
  • Use requestIdleCallback
  • Use PureComponent
  • Use immutable-js
  • Use the Fragment tag
  • Use Element.getBoundingClientRect() to get the visible area
  • Use a virtual DOM
  • Use document.createDocumentFragment
  • Use debounce and throttle
  • Slice large file uploads into chunks
  • Time slicing
  • Virtual lists
  • Algorithms
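
A few of the items above in one small sketch: batching style changes through a class toggle, animating with transform inside requestAnimationFrame, and throttling a scroll handler. Selectors, class names and numbers are all illustrative.

```ts
const box = document.querySelector<HTMLElement>('.box')!;

// Batch style changes: one class toggle instead of many inline style writes.
box.classList.add('is-active');

// Animate with transform inside requestAnimationFrame (compositor-friendly).
let x = 0;
function step() {
  x += 2;
  box.style.transform = `translateX(${x}px)`;
  if (x < 300) requestAnimationFrame(step);
}
requestAnimationFrame(step);

// Throttle: run the handler at most once per interval.
function throttle(fn: (...args: any[]) => void, wait: number) {
  let last = 0;
  return (...args: any[]) => {
    const now = Date.now();
    if (now - last >= wait) {
      last = now;
      fn(...args);
    }
  };
}

window.addEventListener(
  'scroll',
  throttle(() => {
    // e.g. check whether lazily loaded content has entered the viewport
  }, 200)
);
```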

Summary

This article has focused on macro-level monitoring and data-driven KPIs. Beyond that, network conditions, CDN, ISP, cache coverage, proxies, third-party scripts, parser-blocking patterns, disk I/O, IPC latency, antivirus software and firewalls, load balancing, background CPU tasks and server configuration all have a significant impact on web performance; these are aspects that end users usually cannot see or pay little attention to. I have also skipped the handling of static resources such as images and fonts, because there are already plenty of articles about that online.

Anyone who has studied algorithms knows that speeding things up essentially means trading more network, memory and CPU for time, and trading space for time.
You also need to accept that a given optimization decision is right in some scenarios and wrong in others; that is the trade-off. Monitoring and measuring your project's performance automatically, continuously, anytime and anywhere is the mark of mature performance work, because your environment and requirements never stop changing.

Finally, I recommend WPO Stats, a site that collects plenty of good performance-optimization case studies.

Reference material

Front-end performance optimization: when page rendering meets edge computing
The 2020 front-end performance optimization checklist
FID and interaction responsiveness
