One Article Connecting HTTP/0.9 | 1.0 | 1.1 | 2 | 3


After the birth of the World Wide Web in 1989, HTTP quickly became the dominant application-layer protocol in the world. Today, almost every networked scenario relies on HTTP to some degree.

Over its 30-plus-year history, the HTTP protocol has evolved considerably, and further major changes are brewing. These evolutions have made the protocol more expressive and performant, better able to meet changing application needs. This article reviews the history of HTTP and looks ahead to its future.

  • HTTP/0.9
  • HTTP/1.0
  • HTTP/1.1
  • HTTP/2
  • HTTP/3


HTTP/0.9 was born in 1991. The first version of the HTTP protocol had a very simple structure:

  • The client supports only GET requests
  • The server can return only HTML text data
GET /index.html
    Hello World

The schematic diagram of the request is as follows:

[Figure: an HTTP/0.9 exchange — one GET request per TCP connection]

As the figure shows, HTTP/0.9 can only send GET requests, and each request opens a separate TCP connection. The server can only return data in HTML format, and once the response completes, the TCP connection is closed.

Although this request model met the needs of the time, it exposed some problems.

HTTP/0.9 pain points:

  • Only one request method and only one response format
  • TCP connections cannot be reused


HTTP/1.0 was born in 1996. It added header fields on top of HTTP/0.9, greatly expanding HTTP's use cases. This version could transmit not only text but also images, video, and binary files, laying a solid foundation for the rapid growth of the Internet.

The core features are as follows:

  • The request line carries the HTTP protocol version, and the response carries a status code.
  • The POST and HEAD request methods were added.
  • Header fields were added on the request and response sides.

    • Content-Type allows response data to be more than hypertext.
    • Expires and Last-Modified cache headers.
    • Authorization for authentication.
    • Connection: keep-alive supports persistent connections, though it is non-standard.
GET /mypage.html HTTP/1.0
User-Agent: NCSA_Mosaic/2.0 (Windows 3.1)
HTTP/1.0 200 OK
Date: Tue, 15 Nov 1994 08:12:31 GMT
Server: CERN/3.0 libwww/2.17
Content-Type: text/html

    Hello World
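A response like the one above splits cleanly at the blank line into a status line, header fields, and a body. A minimal parser sketch (the helper name and sample values are ours, not part of any real library):

```python
def parse_response(raw):
    """Split a raw HTTP/1.0-style response into status line, headers, and body."""
    head, _, body = raw.partition("\r\n\r\n")   # blank line separates head from body
    lines = head.split("\r\n")
    status = lines[0]
    headers = {}
    for line in lines[1:]:
        name, _, value = line.partition(": ")   # "Name: value" per header field
        headers[name] = value
    return status, headers, body

raw = (
    "HTTP/1.0 200 OK\r\n"
    "Content-Type: text/html\r\n"
    "Server: CERN/3.0 libwww/2.17\r\n"
    "\r\n"
    "Hello World"
)
status, headers, body = parse_response(raw)
assert status == "HTTP/1.0 200 OK"
assert headers["Content-Type"] == "text/html"
assert body == "Hello World"
```

The Content-Type header is what lets the client interpret the body as something other than hypertext.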

The schematic diagram of the request is as follows:

[Figure: an HTTP/1.0 exchange — still one TCP connection per request]

As the figure shows, HTTP/1.0 extends the request methods and response status codes and supports header fields: with the Content-Type header, data in any format can be transmitted. At the same time, HTTP/1.0 still uses one TCP connection per request, so connections cannot be multiplexed.

HTTP/1.0 pain points:

  • TCP connections cannot be reused.
  • HTTP head-of-line blocking: the next HTTP request can be sent only after the previous response completes.
  • A server can provide only one HTTP service.


HTTP/1.1 was born in 1999. It further refined the HTTP protocol and has been in use for more than 20 years; today it is still the most widely deployed HTTP version.

The core features are as follows:

  • Persistent connections.

    • HTTP/1.1 enables persistent connections by default: the TCP connection is not closed immediately after a request, allowing multiple HTTP requests to reuse it.
  • Pipelining.

    • HTTP/1.1 allows multiple HTTP requests to be sent back-to-back without queuing, which addresses HTTP head-of-line blocking. However, responses must still be returned in the order the requests were sent, so only half the problem is solved; it is still not the best experience.
  • Chunked responses.

    • HTTP/1.1 enables streaming rendering: the server need not return all data at once but can split it into chunks, generating and sending one piece at a time. The client can then process data as it arrives, reducing response latency and blank-screen time.
    • BigPipe-style rendering is built on this feature, implemented via the Transfer-Encoding header.
  • The Host header.

    • HTTP/1.1 enables virtual hosting, dividing one server into several hosts so that multiple websites can be deployed on a single machine.
    • Multiple HTTP services can be distinguished by the host's domain name and port: Host: <domain>:<port>
  • Other extensions.

    • Added the Cache-Control and ETag cache headers.
    • Added the PUT, PATCH, OPTIONS, and DELETE request methods.
GET /en-US/docs/Glossary/Simple_header HTTP/1.1
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9; rv:50.0) Gecko/20100101 Firefox/50.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate, br
HTTP/1.1 200 OK
Connection: Keep-Alive
Content-Encoding: gzip
Content-Type: text/html; charset=utf-8
Date: Wed, 20 Jul 2016 10:55:30 GMT
Etag: "547fa7e369ef56031dd3bff2ace9fc0832eb251a"
Keep-Alive: timeout=5, max=1000
Last-Modified: Tue, 19 Jul 2016 00:59:33 GMT
Server: Apache
Transfer-Encoding: chunked
Vary: Cookie, Accept-Encoding

    Hello World
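The Transfer-Encoding: chunked mechanism visible in the response above can be illustrated with a short sketch. The framing follows the chunked format (a hexadecimal size line, the chunk data, and a terminating zero-length chunk); the helper names are ours:

```python
def encode_chunked(parts):
    """Frame an iterable of byte strings as an HTTP/1.1 chunked body."""
    out = b""
    for part in parts:
        if part:  # a zero-length chunk would terminate the body early
            out += b"%x\r\n%s\r\n" % (len(part), part)
    return out + b"0\r\n\r\n"  # the zero-length chunk marks the end

def decode_chunked(data):
    """Reassemble the original body from a chunked byte stream."""
    body, pos = b"", 0
    while True:
        crlf = data.index(b"\r\n", pos)
        size = int(data[pos:crlf], 16)  # chunk size is written in hex
        if size == 0:
            return body
        start = crlf + 2
        body += data[start:start + size]
        pos = start + size + 2  # skip the chunk's trailing CRLF

encoded = encode_chunked([b"Hello ", b"World"])
assert encoded == b"6\r\nHello \r\n5\r\nWorld\r\n0\r\n\r\n"
assert decode_chunked(encoded) == b"Hello World"
```

Because each chunk is self-delimiting, the server can start sending before it knows the total body length, which is what makes streaming rendering possible.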

The schematic diagram of the request is as follows:

[Figure: HTTP/1.1 pipelining over a persistent TCP connection — responses still return in request order]

As the figure shows, HTTP/1.1 can issue multiple requests in parallel over the same reused TCP connection, improving transmission efficiency. However, the server can only respond in the order the requests were sent, so many browsers open up to 6 connections per domain, mitigating head-of-line blocking by adding more queues.

HTTP/1.1 pain points:

  • Head-of-line blocking is not fully solved: the server must return responses in the order requests were sent, so if the response at the head of the queue is particularly slow, every response behind it is blocked.
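The cost of in-order responses can be quantified with a tiny model. Assume, hypothetically, three pipelined requests whose responses become ready at times 5, 1, and 1; in-order delivery forces the fast responses to wait behind the slow one:

```python
def delivery_in_order(ready):
    """HTTP/1.1 pipelining: responses become ready at the given times but
    must be delivered in request order, so each waits for its predecessor."""
    out, prev = [], 0
    for t in ready:
        prev = max(prev, t)  # cannot be delivered before the previous response
        out.append(prev)
    return out

def delivery_unordered(ready):
    """Without the ordering constraint, each response ships as soon as it is ready."""
    return list(ready)

ready = [5, 1, 1]  # hypothetical readiness times of three responses
assert delivery_in_order(ready) == [5, 5, 5]   # fast responses stuck behind the slow one
assert delivery_unordered(ready) == [5, 1, 1]
```

This ordering constraint is exactly what HTTP/2's multiplexing later removes.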


HTTP/2 was born in 2015. Its defining feature is that everything is binary, and it leverages this binary framing to deeply optimize HTTP transmission efficiency.

HTTP/2 divides an HTTP transmission into three concepts:

  • Frame: a unit of binary data, the smallest transmission unit in HTTP/2.
  • Message: one or more frames corresponding to a request or a response.
  • Stream: a bidirectional byte stream within an established connection that can carry one or more messages.

[Figure: frames from multiple streams interleaved on a single TCP connection]

As the figure shows, a single TCP connection carries multiple streams; a stream carries bidirectional messages; a message consists of multiple frames. Each frame header carries a unique identifier pointing to its stream, so frames from different streams can be sent interleaved and then reassembled by stream identifier on the receiving end, realizing the data transfer.
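The reassembly idea can be sketched in a few lines. The frame layout below is illustrative, not the real HTTP/2 wire format; only the principle — regrouping interleaved frames by stream identifier — matches the description above:

```python
from collections import defaultdict

# Each frame carries (stream_id, payload). Frames from different streams
# may arrive interleaved on the single TCP connection.
interleaved = [
    (1, b"GET /sty"), (3, b"GET /app"), (1, b"le.css"), (3, b".js"),
]

streams = defaultdict(bytes)
for stream_id, payload in interleaved:
    streams[stream_id] += payload  # regroup frames by stream identifier

assert streams[1] == b"GET /style.css"
assert streams[3] == b"GET /app.js"
```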

The core features of HTTP / 2 are as follows:

  • Stream prioritization

    • When multiple HTTP requests are sent at the same time, multiple streams are created. Each stream carries a priority identifier, which the server can use to decide the order of responses.
  • Multiplexing

    • Transmission need not follow the order in which HTTP requests were sent; frames can be interleaved. The receiving end uses the stream identifier in each frame header to find the corresponding stream and reassemble the final data.
  • Server push

    • HTTP/2 allows the server to proactively send resources to the client without a request and cache them client-side, avoiding a second round trip.
    • When requesting a page over HTTP/1.1, the browser first sends an HTTP request, receives the HTML, and starts parsing; whenever it encounters a <script> tag, it must issue another HTTP request to fetch the corresponding JS. With HTTP/2, the server can return the required JS, CSS, and other resources along with the HTML, so the browser need not issue further requests when it parses those tags.
  • Header compression

    • HTTP/1.1 header fields carry a lot of information and must accompany every request, consuming many bytes.
    • In HTTP/2, both communicating parties cache a header field table, storing entries such as Content-Type: text/html at an index. To use that header again, a peer only needs to send the corresponding index number.
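A toy version of that indexing scheme might look like the sketch below. Real HTTP/2 header compression (HPACK, RFC 7541) uses a fixed 61-entry static table plus a dynamic table and Huffman coding; the table contents and helper names here are simplified stand-ins:

```python
# Toy index table shared by both sides; real HPACK defines the static
# entries in RFC 7541 and grows a dynamic table during the connection.
table = {2: ("method", "GET"), 31: ("content-type", "text/html")}

def encode(headers, table):
    """Send an index when the header is in the table, else the literal pair."""
    reverse = {v: k for k, v in table.items()}
    return [reverse.get(h, h) for h in headers]

def decode(wire, table):
    """Resolve indices back to header pairs; pass literals through unchanged."""
    return [table[x] if isinstance(x, int) else x for x in wire]

headers = [("method", "GET"), ("content-type", "text/html"), ("x-custom", "1")]
wire = encode(headers, table)
assert wire == [2, 31, ("x-custom", "1")]  # two headers shrink to one index each
assert decode(wire, table) == headers
```

Sending a small integer instead of a repeated header line is where the per-request byte savings come from.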

In addition, although HTTP/2 does not mandate the TLS security protocol, all web browsers that implement HTTP/2 only support sites configured with TLS, to encourage the more secure HTTPS.

The schematic diagram of the request is as follows:

[Figure: HTTP/2 multiplexing — requests and responses interleaved without ordering constraints]

As the figure shows, HTTP/2 requests need not queue to be sent or returned, which completely solves HTTP-level head-of-line blocking. Pain points such as header overhead and resource caching have also been optimized; it looks like a near-perfect scheme.

HTTP/2 has pushed the HTTP-over-TCP architecture to its limit. To optimize further, one has to start with the architecture itself.

In fact, the remaining bottlenecks lie in TCP, the very layer that guarantees reliable transmission:

  • TCP has its own head-of-line blocking. TCP uses sequence numbers to identify the order of data in transit; once a segment is lost, subsequent data must wait for its retransmission before processing can continue.
  • Every TCP connection requires a three-way handshake to establish and a four-way teardown to close, silently adding transmission time.
  • TCP's congestion control, with built-in slow start, congestion avoidance, and related algorithms, makes transmission efficiency unstable.

Solving these problems means replacing TCP, which is exactly HTTP/3's approach. Let's move on.


HTTP/3 is still at the draft stage. Its main change is at the transport layer, replacing TCP with QUIC to sidestep TCP's transmission inefficiencies entirely.

QUIC is a UDP-based multiplexed transport protocol proposed by Google. It needs no TCP-style three-way handshake yet still provides connection reliability. At the deployment level, only the client and server applications need to support QUIC; there are no operating-system or middlebox constraints.

The core features of HTTP / 3 are as follows:

  • Faster transport-layer connection establishment.

    • HTTP/3 is built on QUIC and can establish a connection in 0-RTT, whereas TCP (including the TLS handshake) needs about 3 RTTs.

[Figure: QUIC 0-RTT connection establishment vs. the TCP + TLS handshake]

  • Transport-layer multiplexing.

    • HTTP/3's transport layer uses the QUIC protocol. Data in flight is split into packets, and each packet can be sent independently and interleaved rather than strictly in order, avoiding TCP's head-of-line blocking.

[Figure: independent QUIC streams — loss on one stream does not block the others]

The streams in the figure above are independent of one another: if a packet is lost in stream 2, the normal reading of stream 3 and stream 4 is unaffected.

  • Improved congestion control.

    • Monotonically increasing packet numbers. In TCP, every segment carries a sequence number (seq). If the receiver has not received segment seq by the timeout, it requests retransmission; if the timed-out segment then arrives as well, the sender cannot tell which acknowledgment refers to the original and which to the retransmission. In QUIC, every packet number is monotonically increasing: a retransmitted packet's number is always greater than the lost one's, so the two can be distinguished.
    • No reneging. In TCP, a receiver that runs short of memory or overflows its buffer may discard packets it has already acknowledged, which badly disrupts retransmission logic. QUIC explicitly forbids this: once a packet is acknowledged, it is guaranteed to have been received correctly.
    • More ACK blocks. A receiver normally sends an ACK after receiving the sender's data, but acknowledging every single packet is inefficient, so acknowledgments usually cover several packets at once. TCP's SACK option can report at most 3 blocks of received data, while a QUIC ACK frame can carry up to 256 ACK ranges. On networks with heavy packet loss, more ACK blocks reduce retransmission volume and improve network efficiency.
    • ACK delay. When calculating RTT, TCP ignores the time the receiver spends processing the data, as shown in the figure below; that time is the ACK delay. QUIC takes this delay into account, making its RTT calculation more accurate.

[Figure: ACK delay — receiver processing time included in QUIC's RTT calculation]

  • Optimized flow control.

    • TCP controls flow with a sliding window. If a packet is lost, the window cannot slide past it; it stalls at the loss point waiting for retransmission.
    • The core idea of QUIC flow control is twofold: the sender must not overwhelm the receiver, and a single stream must not hog resources and starve the other streams. QUIC therefore applies flow control at two levels: connection level and stream level.

      • Stream-level flow control: receive window = maximum receive window - data already received
      • Connection-level flow control: receive window = stream 1 window + stream 2 window + ... + stream n window
  • Encrypted and authenticated packets

    • With no encryption or authentication, TCP headers are easy for intermediate network devices to tamper with, inject into, or eavesdrop on in transit.
    • QUIC packets are encrypted and authenticated, guaranteeing the security of data in transit.
  • Connection migration

    • A TCP connection is identified by the four-tuple (source IP, source port, destination IP, destination port); once any of the four changes, the connection breaks. If we switch from a 5G network to Wi-Fi, the IP address changes and the TCP connection naturally drops.
    • QUIC instead identifies a connection with a 64-bit ID generated by the client; as long as the ID stays the same, the connection is maintained without interruption.
  • Forward error correction

    • When sending data, QUIC sends a parity packet alongside the data packets to reduce retransmissions caused by loss.
    • For example:

      • The sender has three packets to send. QUIC computes the XOR of the three and sends it as a separate parity packet, so four packets are sent in total.
      • If one (non-parity) packet is lost in transit, the content of the lost packet can be computed from the other three.
      • Of course, this technique only works when a single packet is lost; if multiple packets are lost, retransmission is the only option.
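The XOR recovery described above is easy to demonstrate. This sketch assumes equal-length packets and exactly one lost data packet; the function names are ours:

```python
def xor_parity(packets):
    """Compute a parity packet as the byte-wise XOR of equal-length packets."""
    parity = bytes(len(packets[0]))  # all zeros; XOR identity
    for p in packets:
        parity = bytes(a ^ b for a, b in zip(parity, p))
    return parity

def recover(received, parity):
    """Rebuild the single missing packet by XOR-ing the parity with the survivors."""
    missing = parity
    for p in received:
        missing = bytes(a ^ b for a, b in zip(missing, p))
    return missing

p1, p2, p3 = b"aaaa", b"bbbb", b"cccc"
parity = xor_parity([p1, p2, p3])
# Suppose p2 is lost in transit; it can be rebuilt from p1, p3 and the parity.
assert recover([p1, p3], parity) == p2
```

Because XOR cancels pairwise, XOR-ing the parity with every surviving packet leaves exactly the lost one, but this only determines a single unknown, hence the one-loss limit.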

As you can see, QUIC sheds TCP's baggage and implements a secure, efficient, and reliable HTTP transport on top of UDP. With 0-RTT connection establishment, transport-layer multiplexing, connection migration, and improved congestion and flow control, QUIC outperforms HTTP/2 in most scenarios. HTTP/3 is well worth looking forward to.

Summary and reflections

Tracing the development of the Internet, this article has introduced the core features of each version from HTTP/0.9 to HTTP/3. To summarize each in one sentence:

  • HTTP/0.9 implements the basic request-response model.
  • HTTP/1.0 adds HTTP headers, enriching the types of transmitted resources and laying the foundation for the Internet's growth.
  • HTTP/1.1 adds persistent connections, pipelining, and chunked responses, improving HTTP transmission efficiency.
  • HTTP/2 adopts a binary framing format and, through multiplexing, header compression, and server push, pushes the HTTP-over-TCP architecture to its efficiency limit.
  • HTTP/3 replaces the transport layer with QUIC, further improving transmission efficiency through improved congestion control, flow control, 0-RTT connection establishment, transport-layer multiplexing, connection migration, and other features.

From HTTP/1.1 onward, the direction of HTTP's evolution has been ever-better transmission efficiency. We look forward to future HTTP versions bringing an even faster transport experience.
