From input URL to render page — network protocol


This is the second article in the series “From input URL to render page”, and it covers the network protocols involved.
We know that TCP/IP divides network protocols into four layers:

(Figure: the four-layer TCP/IP model)

In this article we will focus on the application layer, the transport layer and the network layer.

How is data transmitted on the network?

Network layer: IP

For data to be transmitted over the Internet, it must conform to the Internet Protocol (IP) standard. Every device on the Internet has a unique address, represented as a number.
An analogy is online shopping: a delivery address uniquely identifies you, and once the sender knows that address, a package can be shipped to it. A computer’s address is called its IP address, and visiting a website is really just your computer requesting information from another computer.
To send a packet from host A to host B, host B’s IP address is attached to the packet before transmission so that it can be routed correctly along the way. Host A’s IP address is attached as well, so that host B knows where to send its reply. This additional information is placed in a data structure called the IP header.

Let’s walk through an easy-to-understand, simplified transmission of a packet from host A to host B (leaving out the full four-layer protocol stack):

  • The upper layer hands the data packet to the network layer;
  • The network layer attaches an IP header to the packet, forming a new IP packet, which it hands to the layer below;
  • The bottom layer transmits the packet to host B over the physical network;
  • When the packet arrives at host B’s network layer, host B removes the IP header and passes the remaining data upward;
  • Finally, the payload reaches the upper layer of host B.
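The five steps above can be sketched as a toy model in Python, with plain dictionaries standing in for real packets (the field names are illustrative, not the actual IP header layout):

```python
# Toy model of IP-layer encapsulation. The network layer prepends an
# "IP header" carrying the source and destination addresses, and the
# receiving host strips it off before handing the payload upward.

def ip_send(payload, src, dst):
    """Network layer on host A: attach an IP header to the payload."""
    return {"ip_header": {"src": src, "dst": dst}, "data": payload}

def ip_receive(packet):
    """Network layer on host B: strip the IP header, pass the data up."""
    return packet["ip_header"], packet["data"]

packet = ip_send("hello", src="192.168.0.2", dst="93.184.216.34")
header, data = ip_receive(packet)
print(data)            # the payload that reaches host B's upper layer
print(header["src"])   # host B can reply to this address
```

Note that the header travels with the data the whole way; host B only learns where to reply because host A’s address was attached at send time.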

Transport layer: UDP and TCP

The IP-based transmission discussed above is a very low-level mechanism: it is only responsible for delivering packets to the other computer, but the other computer does not know which application the packets are for. Therefore, we need protocols built on top of IP that can talk to applications; this is the transport layer. Its most common protocols are UDP and TCP.

By adding the transport layer, the previous three-layer structure expands into a four-layer structure, as shown in the figure below.

(Figure: the four-layer structure after adding the transport layer)
Next, let’s trace the route data takes now that the transport layer has been added:

  • The upper layer hands the data packet to the transport layer;
  • The transport layer attaches a UDP/TCP header in front of the packet to form a new packet, which it hands to the network layer;
  • The network layer attaches an IP header to form a new IP packet and hands it to the layer below, which transmits it to host B over the physical network;
  • When the packet reaches host B’s network layer, host B strips the IP header and hands the remaining data to the transport layer;
  • The transport layer removes the UDP/TCP header and delivers the data to the right application according to the port number in that header;
  • In the end, the packet arrives at the target application on host B.
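Continuing the toy model, the transport layer’s contribution is the port number: it is what lets host B hand the payload to the right application (again, the field names here are illustrative):

```python
# Toy model of the transport layer: a UDP/TCP-style header adds port
# numbers, and the receiving host uses the destination port to decide
# which application gets the payload.

def transport_send(payload, src_port, dst_port):
    """Attach a transport header, then (conceptually) pass down to IP."""
    return {"header": {"src_port": src_port, "dst_port": dst_port},
            "data": payload}

def transport_receive(segment, apps):
    """Deliver the payload to the app bound to the destination port."""
    port = segment["header"]["dst_port"]
    apps[port].append(segment["data"])

apps = {80: [], 53: []}                   # apps listening on two ports
segment = transport_send("GET /", src_port=50000, dst_port=80)
transport_receive(segment, apps)
print(apps[80])   # delivered to the web server on port 80, not to DNS
```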

So what’s the difference between using UDP and TCP at the transport layer, and which scenarios suit each? Let’s take a look.

Comparison of UDP and TCP

When sending data with UDP, many factors can corrupt or lose packets along the way. UDP can verify (via a checksum) whether the data arrived intact, but it provides no retransmission mechanism for bad packets; it simply discards them, and the sender cannot know whether its packets reached the destination at all. Although UDP cannot guarantee the reliability of the data, it is very fast, so it is used in areas that prioritize speed over strict data integrity, such as online video and interactive games.

UDP’s disadvantages:

  • Packets are easily lost in transit, and there is no retransmission mechanism;
  • Large files are split into many small packets for transmission. These packets may travel different routes and arrive at different times, but UDP has no way of reassembling them in order, so it cannot restore them into the complete file.
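UDP’s fire-and-forget nature is visible in Python’s standard socket API. A minimal sketch over the loopback interface: there is no handshake and no acknowledgement, and if this datagram were dropped, nothing would resend it.

```python
import socket

# Minimal UDP exchange over loopback. sendto() just fires the datagram;
# there is no connection setup and no acknowledgement from the receiver.

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))      # let the OS pick a free port
receiver.settimeout(5)
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"frame-1", addr)      # fire and forget

data, _ = receiver.recvfrom(1024)
print(data)                          # b'frame-1'

sender.close()
receiver.close()
```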

To address these shortcomings of UDP, the TCP header contains not only the destination and source port numbers but also a sequence number, so that the receiver can rearrange packets into the correct order.
In addition, TCP guarantees the reliability of the data and provides a retransmission mechanism.
How does TCP do this? This brings us to the famous three-way handshake and four-way teardown (the “four waves”).

Let’s look at one complete TCP transmission, end to end:

  • First, the connection-establishment phase (three-way handshake). TCP provides connection-oriented transmission: both ends prepare before data communication begins. The “three-way handshake” means that when a TCP connection is established, the client and server exchange a total of three packets to confirm the connection.
  • Second, the data-transmission phase. In this phase the receiver must acknowledge every packet: after receiving a packet, it sends a confirmation back to the sender. If the sender does not receive a confirmation within the specified time, it judges the packet lost and its retransmission mechanism is triggered. As before, a large file is split into many small packets during transmission; when they arrive, the receiver sorts them by the sequence number in the TCP header, ensuring the integrity of the data.
  • Finally, the disconnection phase (four-way teardown). After the data transfer finishes, the connection is terminated.
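These three phases map directly onto the socket API; the kernel sends the handshake and teardown packets itself, so applications never see SYN, ACK or FIN. A minimal echo exchange over loopback:

```python
import socket
import threading

# connect() performs the three-way handshake, sendall()/recv() run the
# acknowledged (and, if needed, retransmitted) transfer, and close()
# starts the four-step teardown. All of that happens in the kernel.

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
addr = server.getsockname()

def serve():
    conn, _ = server.accept()        # handshake completed here
    conn.sendall(conn.recv(1024))    # echo the client's data back
    conn.close()                     # begins the teardown

t = threading.Thread(target=serve)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(addr)                 # three-way handshake
client.sendall(b"ping")
reply = client.recv(1024)
client.close()                       # four-step teardown
t.join()
server.close()
print(reply)                         # b'ping'
```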

The following is a classic illustration, widely shared online, of the three-way handshake and the four-way teardown.
(Figure: the TCP three-way handshake)

  • The client initiates the connection by sending a SYN packet with seq = x (a random number).
  • After receiving it, the server responds to the client and sends its own connection request: a single packet carrying both SYN and ACK, with ack = x + 1 and seq = y (a random number).
  • When the client receives the server’s response and sees ack = x + 1, it knows the server has accepted its request; it then acknowledges the server’s connection request with ack = y + 1.

(Figure: the TCP four-way teardown)

  • First, the client sends a FIN packet with seq = x, then enters the FIN_WAIT_1 state.
  • After receiving it, the server replies with ack = x + 1 (same principle as above) and seq = y, indicating that it has received the client’s close request; the server enters the CLOSE_WAIT state and the client enters the FIN_WAIT_2 state.
  • After the server has finished processing its remaining data, it sends its own FIN packet with ack = x + 1 and seq = z; the server then enters the LAST_ACK state and sends no further messages.
  • After receiving the server’s FIN packet, the client replies with ack = z + 1 and enters the TIME_WAIT state; the server enters the CLOSED state as soon as it receives this reply. If the client receives nothing more after waiting two MSLs (maximum segment lifetime), the server has closed normally, so the client also enters the CLOSED state and the connection is closed.

By now you should understand that, to guarantee the reliability of data transmission, TCP sacrifices transmission speed.

HTTP protocol

Next, let’s look at the development history of the HTTP protocol at the application layer.

The HTTP/1 era


First, let’s look at the earliest version, HTTP/0.9. It appeared mainly to serve academic exchange, and the requirement was very simple: transferring HTML hypertext between machines, which is why it is called the HyperText Transfer Protocol. Its implementation was also very simple, using a request-response model: the client sends a request and the server returns the data.

  • Because HTTP is based on TCP, the client first establishes a TCP connection to the server using its IP address and port; establishing the connection is the TCP three-way handshake described above.
  • Once the connection is established, a GET request line is sent, such as GET /index.html to fetch index.html.
  • After receiving the request, the server reads the corresponding HTML file and returns the data to the client as an ASCII character stream.
  • When the HTML document has been transferred completely, the connection is closed.
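That whole exchange can be simulated with a local TCP socket: the client sends nothing but a request line, and the server answers with the raw HTML (no status line, no headers) and closes the connection to mark the end of the document. The page content here is made up for illustration:

```python
import socket
import threading

# Simulated HTTP/0.9 exchange: one request line in, a bare HTML body
# out, and the closed connection is the only end-of-document marker.

PAGE = b"<html>hello</html>"         # illustrative document

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
addr = server.getsockname()

def serve():
    conn, _ = server.accept()
    conn.recv(1024)                  # e.g. b"GET /index.html\r\n"
    conn.sendall(PAGE)               # body only: no header section
    conn.close()                     # closing signals "done"

t = threading.Thread(target=serve)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(addr)
client.sendall(b"GET /index.html\r\n")   # the entire HTTP/0.9 request
chunks = []
while True:                          # read until the server closes
    part = client.recv(1024)
    if not part:
        break
    chunks.append(part)
client.close()
t.join()
server.close()
print(b"".join(chunks))              # b'<html>hello</html>'
```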

Generally speaking, the requirements at the time were very simple: transferring small HTML files. Accordingly, the implementation of HTTP/0.9 has the following three characteristics.

  • First, there is only a request line, with no HTTP request headers or body, because the request line alone fully expresses the client’s needs.
  • Second, the server returns no header information, because it has nothing to tell the client beyond the data itself.
  • Third, the returned file content is transmitted as an ASCII character stream. Since all files were in HTML format, ASCII was the natural choice.


As the Internet developed, transmitting only HTML could no longer meet demand: pages now included JavaScript, CSS, images, audio, video and other file types. Supporting the download of multiple file types therefore became a core requirement of HTTP/1.0, and file formats were no longer limited to ASCII encoding; many other encodings appeared.

To let the client and server communicate more flexibly, HTTP/1.0 introduced request headers and response headers, stored as key-value pairs. When HTTP sends a request, it carries the request header information, and when the server returns data, it first returns the response header information. For example, the following is part of a request header and a response header:

Accept: text/html // the client expects the server to return an HTML file
Accept-Encoding: gzip, deflate, br // the client accepts gzip, deflate or br compression
Accept-Charset: ISO-8859-1, UTF-8 // the expected encoding of the returned file is ISO-8859-1 or UTF-8
Accept-Language: zh-CN, zh // the preferred language of the page is Chinese
Content-Encoding: br // the server compressed the response with br
Content-Type: text/html; charset=UTF-8 // the server returned an HTML file encoded as UTF-8

This is how browser and server communicated in the HTTP/1.0 era, rather like two people talking in an agreed code.

HTTP/1.0 not only supports multiple file types well but also introduces many other features, all implemented through request and response headers.
Let’s look at some typical new ones:

  • The server may be unable to handle some requests, or may handle them incorrectly, and it needs to tell the browser how the request was ultimately handled. This introduced the status code, which is communicated to the browser through the response line.
  • To reduce the pressure on the server, a caching mechanism was provided to cache downloaded data.
  • The server needs basic statistics about its clients, such as how many users are on Windows or macOS, so the User-Agent field was added to the request header.


Although HTTP/1.0 could cope with most scenarios, it still had defects, which HTTP/1.1 improved on as follows:

  • Every HTTP communication required three stages: establishing a TCP connection, transferring the HTTP data and disconnecting. HTTP/1.1 added persistent connections (Connection: keep-alive), whose feature is that multiple HTTP requests can be transmitted over one TCP connection; as long as neither the browser nor the server explicitly disconnects, the TCP connection stays open. (Currently, browsers allow 6 persistent TCP connections per domain name by default.)
  • The head-of-line blocking problem was not solved.
  • Originally, each domain name was bound to a unique IP address, so a server could support only one domain. With the development of virtual hosting, however, a single physical host needed to carry multiple virtual hosts, each with its own domain name but all sharing one IP address. HTTP/1.1 added the Host field to the request header to carry the current domain name, so the server can respond differently depending on the Host value.
  • Originally, the complete data size had to be set in the response header, such as Content-Length: 901, so that the browser knew how much data to receive. But as server technology developed, much page content became dynamically generated, so the final size is unknown before transmission begins and the browser cannot know when it has received all the data. HTTP/1.1 therefore lets the server split the data into chunks of arbitrary size, prefix each chunk with its own length, and send a zero-length chunk as the end-of-data marker. This provides support for dynamic content.


Although HTTP/1.1 adopted many strategies to optimize resource loading, with some success, its bandwidth utilization is not ideal, and this is one of its core problems. (Bandwidth is the maximum number of bytes that can be sent or received per second: the maximum sent per second is the uplink bandwidth, and the maximum received per second is the downlink bandwidth.) The problem has three main causes.

  • TCP slow start. Once a TCP connection is established, it enters the data-sending state. At first, TCP sends data at a very low rate and then gradually speeds up until the sending rate reaches an ideal level; this process is called slow start (similar to a car accelerating from a stop). Slow start is a TCP strategy for reducing network congestion, and we cannot change it. It causes performance problems because the key resources of a page, such as HTML, CSS and JavaScript files, are usually not large, yet requests for them are made right after the TCP connection is established, during slow start, so they take much longer than normal and delay the precious first render of the page.
  • Multiple simultaneous TCP connections compete for fixed bandwidth. When the system opens several TCP connections at once and bandwidth is sufficient, each connection’s sending and receiving rate slowly increases; once bandwidth is insufficient, the connections all slow down. For example, if a page has 200 files served from three CDNs, loading it needs 6 × 3 = 18 TCP connections; when bandwidth runs short during the download, every connection dynamically slows its receiving rate. The problem is that some connections carry key resources such as CSS and JavaScript files while others carry ordinary resources such as images and videos, and the connections cannot negotiate to let the key resources download first, which can delay those key resources.
  • Head-of-line blocking. With persistent connections in HTTP/1.1, although one TCP pipe is shared, only one request can be processed in the pipe at a time; until the current request finishes, all other requests are blocked.

HTTP/2 introduces the famous multiplexing technique to solve these three problems:

  • A domain name uses only one persistent TCP connection, and head-of-line blocking at the HTTP level is eliminated.
  • Requests are split into frames for transmission, so requests can proceed in parallel.

How does a request proceed once multiplexing is added?

  • First, the browser prepares the request data, including the request line, request headers and so on; with a POST method there is also a request body.
  • The data passes through the binary framing layer, where it is converted into frames carrying a request ID, and those frames are sent to the server through the protocol stack.
  • After the server receives the frames, it combines all frames with the same ID into a complete request.
  • The server then processes the request and passes the response line, response headers and response body to the binary framing layer.
  • Likewise, the binary framing layer converts this response data into frames carrying the request ID and sends them to the browser through the protocol stack.
  • After the browser receives the response frames, it delivers the frame data to the corresponding request according to the ID.
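The split-interleave-regroup cycle above can be sketched as a toy model (the frame size, messages and stream IDs are illustrative; real HTTP/2 frames are binary structures with typed headers):

```python
import itertools
from collections import defaultdict

# Toy model of HTTP/2 multiplexing: each request is split into frames
# tagged with a stream ID, frames from different requests interleave on
# one connection, and the receiver regroups them by ID.

def split_into_frames(stream_id, message, size=4):
    """Cut a message into (stream_id, chunk) frames."""
    return [(stream_id, message[i:i + size])
            for i in range(0, len(message), size)]

frames_css = split_into_frames(1, "GET /style.css")
frames_js = split_into_frames(3, "GET /app.js")

# Interleave the two streams' frames on the same "connection".
wire = [f for pair in itertools.zip_longest(frames_css, frames_js)
        for f in pair if f is not None]

# The receiver reassembles each request from frames sharing one ID.
streams = defaultdict(str)
for stream_id, chunk in wire:
    streams[stream_id] += chunk

print(streams[1], "|", streams[3])
```

Because each frame carries its stream ID, neither request has to wait for the other to finish, which is exactly what the single-request-at-a-time HTTP/1.1 pipe could not do.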

From the analysis above we know that multiplexing, the core feature of HTTP/2, enables the parallel transmission of resources, and it is built on the binary framing layer. On top of that layer, HTTP/2 implements many other features as well. Let’s take a brief look.

  1. Request priority. Some data in the browser is very important, but an important request may be sent later than less important ones. If the server replied strictly in request order, important data might reach the browser long after it is needed, which is very unfriendly to the user experience. To solve this, HTTP/2 lets a request carry a priority when it is sent, so the server handles higher-priority requests first after receiving them.
  2. Server push. HTTP/2 can also push data to the browser in advance. When a user requests an HTML page, the server knows which JavaScript and CSS files that page will reference, so after receiving the HTML request it can send the CSS and JavaScript files along with it. Then, once the browser has parsed the HTML, the required CSS and JavaScript are already at hand, which is crucial for first-page-load speed.
  3. Header compression. HTTP/2 compresses request and response headers. On the one hand, header values are compactly encoded (HPACK uses Huffman coding) before sending; on the other, the client and the server each maintain a matching header table in which fields are stored and given an index number, so a field that has been sent before is sent again only as its index. This improves speed.
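The indexing half of header compression can be sketched as two mirrored tables. This is a toy model of the idea behind HPACK, not the real algorithm: HPACK additionally has a predefined static table, eviction rules and Huffman coding.

```python
# Toy model of HPACK-style header indexing: sender and receiver keep
# the same append-only table of headers they have seen, so a repeated
# header travels as a small index instead of the full name/value pair.

class HeaderTable:
    def __init__(self):
        self.table = []                  # mirrored, append-only table

    def encode(self, header):
        if header in self.table:
            return ("index", self.table.index(header))
        self.table.append(header)
        return ("literal", header)       # first time: send it in full

    def decode(self, token):
        kind, value = token
        if kind == "index":
            return self.table[value]     # look it up instead
        self.table.append(value)
        return value

sender, receiver = HeaderTable(), HeaderTable()
h = ("user-agent", "Mozilla/5.0")
first = receiver.decode(sender.encode(h))    # sent in full the first time
second = receiver.decode(sender.encode(h))   # sent as an index afterwards
print(sender.encode(h))                      # ('index', 0)
```

Since both tables grow in lockstep, a one-byte-ish index can stand in for a long header like a cookie or user-agent string on every request after the first.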
