Review the differences between HTTP 1.0, HTTP 1.1 and HTTP 2.0


Some differences between HTTP 1.0 and HTTP 1.1

Cache processing

In HTTP 1.0, caching decisions rely mainly on the If-Modified-Since header (which compares the resource's last update time) and the Expires header (an absolute expiration time that depends on the client's local clock).

HTTP 1.1 introduces more cache control strategies:

  • ETag (entity tag): an identifier for a specific version of a resource
  • If-Unmodified-Since: proceed only if the resource has not been modified since the given time
  • If-Match: proceed only if the resource's current ETag matches the one given
  • If-None-Match: proceed only if none of the given ETags match (used to validate cached copies)

HTTP 1.1 also provides other optional cache headers for controlling cache policy.
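The validation headers above can be sketched as server-side logic for a conditional GET. This is a minimal illustration, assuming a resource with a known ETag and last-modified timestamp (the function and variable names are illustrative, not from any framework):

```python
# Minimal sketch of server-side cache validation for a conditional GET.
def respond(request_headers, etag, last_modified):
    # If-None-Match: the cached copy is still valid when the ETag matches
    if request_headers.get("If-None-Match") == etag:
        return 304  # Not Modified: client can reuse its cache
    # If-Modified-Since: unchanged timestamp also means the cache is valid
    if request_headers.get("If-Modified-Since") == last_modified:
        return 304
    return 200  # otherwise send the full resource again

etag, mtime = '"v1"', "Mon, 01 Jan 2024 00:00:00 GMT"
assert respond({"If-None-Match": '"v1"'}, etag, mtime) == 304
assert respond({"If-None-Match": '"v0"'}, etag, mtime) == 200
assert respond({"If-Modified-Since": mtime}, etag, mtime) == 304
```

A real server would also parse and compare the dates rather than match them as strings; the sketch only shows the decision flow.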

Bandwidth optimization

HTTP 1.0 can waste bandwidth: for example, a client may need only part of an object, yet the server sends the whole object, and there is no support for resuming an interrupted transfer.

HTTP 1.1 supports range requests via the Range header, which makes resumable transfers possible.
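The idea can be sketched as a server honoring a `bytes=start-end` Range header and answering with 206 Partial Content. This is a simplified illustration (a real server must also handle open-ended and invalid ranges):

```python
# Sketch: serve a byte range of a resource, as HTTP 1.1 range requests allow.
def serve_range(body: bytes, range_header: str):
    start, end = range_header.removeprefix("bytes=").split("-")
    start = int(start)
    end = int(end) if end else len(body) - 1  # "bytes=N-" means "to the end"
    # 206 Partial Content, a Content-Range header, and only the requested slice
    return 206, f"bytes {start}-{end}/{len(body)}", body[start:end + 1]

status, content_range, chunk = serve_range(b"0123456789", "bytes=3-6")
assert (status, chunk) == (206, b"3456")
assert content_range == "bytes 3-6/10"
```

A client resuming a download simply asks for `bytes=N-` where N is how much it already has.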

Host header processing

HTTP 1.0 assumes that each server is bound to a unique IP address, so the URL in the request message does not carry a host name. With the development of virtual hosting, however, a single physical server can host multiple virtual hosts that share one IP address. HTTP 1.1 therefore requires both request and response messages to support the Host header field; a request message without a Host header is rejected with an error (400 Bad Request).
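Host-based routing can be sketched as a lookup table keyed by the Host header, with the mandatory-Host rule enforced first. The site names here are illustrative:

```python
# Sketch of virtual-host routing: one IP address, many sites, selected
# by the Host header that HTTP 1.1 makes mandatory.
SITES = {"blog.example.com": "blog site", "shop.example.com": "shop site"}

def route(headers):
    host = headers.get("Host")
    if host is None:
        return 400, "Bad Request"   # HTTP 1.1: Host is required
    if host not in SITES:
        return 404, "Not Found"
    return 200, SITES[host]

assert route({"Host": "blog.example.com"}) == (200, "blog site")
assert route({}) == (400, "Bad Request")
```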

Long connection

HTTP 1.0 requires the Connection: keep-alive header to tell the server to keep the connection open, whereas HTTP 1.1 uses persistent connections by default, removing HTTP 1.0's need to create a new connection for every request.

HTTP runs on top of TCP/IP. Creating a TCP connection requires a three-way handshake, which carries a cost; re-establishing a connection for every exchange hurts performance. It is therefore better to keep a long-lived connection and reuse it for multiple requests.

HTTP 1.1 supports persistent connections and request pipelining: multiple HTTP requests and responses can be carried over one TCP connection, reducing the overhead and latency of repeatedly opening and closing connections.
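Connection reuse can be demonstrated with Python's standard library alone: a tiny local HTTP/1.1 server, and a client that issues two requests on one `HTTPConnection` and checks that the same TCP socket is used for both. This is a sketch for illustration, not a production server:

```python
import http.client
import http.server
import threading

class Handler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"   # persistent connections by default
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):    # silence request logging
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
conn.request("GET", "/")
conn.getresponse().read()
first_socket = conn.sock            # remember the underlying TCP socket
conn.request("GET", "/")            # second request on the same connection
conn.getresponse().read()
assert conn.sock is first_socket    # reused: no second TCP handshake
server.shutdown()
```

With `protocol_version = "HTTP/1.0"` on the handler, the connection would instead be closed after each response.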

Management of error notification

HTTP 1.1 adds 24 new error status codes. For example, 409 (Conflict) indicates that the request conflicts with the current state of the resource, and 410 (Gone) indicates that a resource on the server has been permanently removed.

New request methods

  • PUT: asks the server to store a resource at the request URI
  • DELETE: asks the server to delete the identified resource
  • OPTIONS: queries the capabilities of the server, or the options and requirements associated with a resource
  • CONNECT: reserved in HTTP 1.1 for future use (in practice, used to establish tunnels through proxies)
  • TRACE: asks the server to echo back the request it received, mainly for testing and diagnosis
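A server's handling of OPTIONS (and of unsupported methods) can be sketched as a small dispatcher; the method list here is a hypothetical example for one resource:

```python
# Sketch: answering OPTIONS with the Allow header, and rejecting
# unsupported methods with 405 Method Not Allowed.
SUPPORTED = ("GET", "PUT", "DELETE", "OPTIONS")

def handle(method):
    if method == "OPTIONS":
        return 204, {"Allow": ", ".join(SUPPORTED)}   # advertise capabilities
    if method in SUPPORTED:
        return 200, {}
    return 405, {"Allow": ", ".join(SUPPORTED)}       # tell client what works

assert handle("OPTIONS") == (204, {"Allow": "GET, PUT, DELETE, OPTIONS"})
assert handle("PATCH")[0] == 405
```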

Differences between HTTP 2.0 and HTTP 1.x

The main defect of HTTP 1.x can be summarized as blocking: at any one time there is a limit on the number of concurrent requests to the same domain, and requests beyond that limit are blocked.

Binary framing

HTTP 1.x parsing is text-based. Parsing a text-based protocol has inherent drawbacks: text can take many forms, so a robust parser must handle many cases. Binary is different: only combinations of 0 and 1 need to be recognized. For this reason, HTTP 2.0 adopts a binary format, which is both convenient and robust to parse.

HTTP 2.0 adds a binary framing layer between the application layer (HTTP 2.0) and the transport layer (TCP). Without changing the semantics, methods, status codes, URIs, or header fields of HTTP 1.x, it removes HTTP 1.1's performance limitations, improving transfer performance and achieving low latency and high throughput. In the binary framing layer, HTTP 2.0 divides all transmitted information into smaller messages and frames and encodes them in binary format: the header information of HTTP 1.x is carried in HEADERS frames, and the corresponding request body in DATA frames.

  • Frame: the smallest unit of HTTP 2.0 data communication.
  • Message: a logical HTTP message in HTTP 2.0, such as a request or response, consisting of one or more frames.
  • Stream: a virtual channel that exists within a connection. Streams can carry bidirectional messages, and each stream has a unique integer ID.
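Every HTTP/2 frame begins with a fixed 9-byte header (24-bit payload length, 8-bit type, 8-bit flags, and a 31-bit stream identifier after one reserved bit). The layout can be sketched by parsing those bytes directly:

```python
# Sketch: decoding the fixed 9-byte HTTP/2 frame header.
def parse_frame_header(data: bytes):
    length = int.from_bytes(data[0:3], "big")          # 24-bit payload length
    frame_type, flags = data[3], data[4]               # 8-bit type, 8-bit flags
    stream_id = int.from_bytes(data[5:9], "big") & 0x7FFFFFFF  # drop reserved bit
    return length, frame_type, flags, stream_id

# A HEADERS frame (type 0x1) with the END_HEADERS flag (0x4) on stream 1,
# carrying a 16-byte payload:
header = (16).to_bytes(3, "big") + bytes([0x1, 0x4]) + (1).to_bytes(4, "big")
assert parse_frame_header(header) == (16, 0x1, 0x4, 1)
```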


Multiplexing

Multiplexing allows multiple request/response messages to be in flight simultaneously over a single HTTP 2.0 connection. This connection sharing improves connection utilization and reduces latency: each request carries a stream ID, so many requests can coexist on one connection, their frames freely interleaved, and the receiver reassembles them by stream ID and attributes each to the right request.

Under the HTTP 1.1 protocol, a browser allows only a limited number of simultaneous requests to the same domain; requests over the limit are blocked. This is one reason some sites spread static resources across multiple CDN domain names.

Of course, an HTTP 1.1 client can also open several more TCP connections to handle additional concurrent requests, but creating TCP connections carries its own overhead.

A TCP connection has a warm-up period (slow start): the sender first checks that data is delivered successfully, then gradually increases the transmission rate. A burst of requests over freshly created connections therefore gets slow responses from the server. It is best to use an already-established connection that can absorb the burst of concurrent requests.

HTTP 2.0 achieves parallel streams easily without relying on multiple TCP connections: a single TCP connection per domain suffices, eliminating the latency and memory cost of extra connections. HTTP 2.0 reduces the basic unit of communication to a frame, which belongs to a message in a logical stream, and messages are exchanged bidirectionally and in parallel over the same TCP connection.
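The reassembly idea behind multiplexing can be sketched with a list of interleaved (stream ID, payload) frames that the receiver groups back into per-stream messages. The payloads here are illustrative placeholders:

```python
# Sketch: frames from different streams arrive interleaved on one
# connection; the receiver regroups them by stream ID.
frames = [(1, b"GET /a "), (3, b"GET /b "), (1, b"part2"), (3, b"part2")]

streams = {}
for stream_id, payload in frames:
    streams[stream_id] = streams.get(stream_id, b"") + payload

assert streams[1] == b"GET /a part2"   # stream 1 reassembled in order
assert streams[3] == b"GET /b part2"   # stream 3 reassembled independently
```

Because each frame names its stream, neither stream ever blocks the other at the HTTP layer.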

Header compression

HTTP 1.x headers carry a large amount of information and are resent with every request. HTTP 2.0 uses the HPACK algorithm to compress header data, reducing the header size that must be transmitted. Both sides of the communication cache a table of header fields and exchange incremental updates to the headers, which avoids retransmitting repeated headers and shrinks what remains to be sent.

The header compression strategy:

  • HTTP 2.0 uses a "header table" on both client and server to track and store previously sent key/value pairs; identical data is no longer resent with every request and response;
  • the header table persists for the lifetime of the HTTP 2.0 connection and is updated incrementally by both client and server;
  • each new header key/value pair is either appended to the current table or replaces an earlier value in the table.
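The table idea above can be simulated in a few lines. This is a deliberately simplified model of the concept, not the real HPACK wire format (which also has a fixed static table and Huffman coding):

```python
# Simplified simulation of HPACK's header-table idea: a repeated header
# is sent as a small index instead of the full key/value pair.
class HeaderTable:
    def __init__(self):
        self.entries = []                 # dynamic table, newest first
    def encode(self, name, value):
        pair = (name, value)
        if pair in self.entries:
            return ("index", self.entries.index(pair))  # tiny reference
        self.entries.insert(0, pair)      # remember it for next time
        return ("literal", name, value)   # send the full pair once

enc = HeaderTable()
assert enc.encode(":method", "GET") == ("literal", ":method", "GET")
assert enc.encode(":method", "GET") == ("index", 0)   # repeat costs one index
```

The decoder on the other side maintains the same table, so a received index can be expanded back into the full header pair.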

Server push

Server push is a mechanism by which the server sends data before the client requests it.

The server can proactively push additional resources along with the HTML page, instead of waiting for the browser to parse to the relevant position and issue a request before responding. For example, the server can push JS and CSS files to the client so that the client does not need to request them while parsing the HTML.

Resources pushed by the server are stored on the client; the client can then load them locally without going over the network again, which is naturally much faster.
