[distributed] load balancing 01 – basics of load balancing

Time: 2021-12-01

Load balancing series topics

01 – Load balancing basics

02 – Consistent hashing principles

03 – Java implementation of a consistent hash algorithm

04 – Java implementation of load balancing algorithms

load balancing

Load balancing is a key component of highly available network infrastructure. It is used to distribute workload across multiple servers, improving the performance and reliability of websites, applications, databases, and other services.

Traditional architecture

[Figure: a user connecting directly to a single web server]

Here, the user connects directly to the web server. If that server goes down, the user simply cannot access the site.

In addition, if many users access the server simultaneously and exceed what it can handle, pages will load slowly or connections will fail entirely.

Introducing load balancing

This failure can be mitigated by introducing a load balancer and at least one additional web server in the back end.

Generally, all back-end servers will provide the same content, so that users can receive consistent content no matter which server responds.

[Figure: users connecting to a load balancer that forwards requests to multiple back-end servers]

As can be seen from the figure, the user accesses the load balancer, and then the load balancer forwards the request to the back-end server.

In this case, the single point of failure is now transferred to the load balancer.

This can be alleviated by introducing a second load balancer, but before that, let's look at how a load balancer actually works.

Advantages of load balancing

(1) High performance: load balancing distributes traffic more evenly across multiple devices or links, improving the performance of the whole system;

(2) Scalability: devices or links can easily be added to the cluster to meet growing business needs without degrading service quality;

(3) High reliability: the failure of one or even several devices or links does not interrupt service, improving the reliability of the whole system;

(4) Manageability: most management work is concentrated on the device running the load balancing logic, while the device cluster or link cluster itself needs only routine configuration and maintenance;

(5) Transparency: to users, the cluster appears as a single highly reliable, high-performance device or link. Users neither perceive nor care about the specific network structure, and adding or removing devices or links does not affect normal service.

Reverse proxy and load balancing

A reverse proxy is one way to implement load balancing.

Reverse proxy

Let's talk about reverse proxying first. The user sends a request to the proxy server; the proxy server then forwards the request to a real server according to some algorithm, and finally returns the response to the user.

This approach improves security, and it also spreads user requests across multiple real servers, achieving load balancing.

load balancing

Let’s talk about load balancing.

Load balancing exists to reduce the pressure on any single server as much as possible through horizontal scaling.

Common web-level load balancing schemes include hardware F5, Nginx proxying, LVS, and the load balancing services of the various cloud providers (such as AWS's ELB), etc.

A load balancer generally sits in front of the servers that actually provide the service. Through a service such as ELB, traffic can be shared evenly, reducing the pressure on any single server.

Once a load balancing layer is added, the single-point problem of that layer itself must be considered whenever one scheme is used alone.

If the server responsible for load balancing cannot withstand the pressure and goes down, the whole service becomes unavailable.

Therefore, Nginx and LVS should be deployed with multiple proxy instances wherever possible, with failover and fault alarms configured, so that problems with the proxy servers can be handled in time.

ELB is a service provided by Amazon; its implementation runs on hundreds or even thousands of machines underneath, so think of it as a proxy cluster.

The above are the general differences; the concrete implementation must be chosen according to the actual business situation.

What is the difference between layer 4 and layer 7 load balancing?

Load balancing is divided into layer-4 and layer-7 load balancing.

Layer-4 load balancing works at the transport layer of the OSI model. Its main job is forwarding: after receiving traffic from the client, it forwards it to an application server by modifying the address information in the packets.

Layer-7 load balancing works at the application layer of the OSI model. Because it must parse application layer traffic, a layer-7 load balancer needs a complete TCP/IP protocol stack once it receives traffic from the client.

A layer-7 load balancer establishes a complete connection with the client, parses the application layer request traffic, selects an application server according to the scheduling algorithm, and then establishes a second connection with that server to send the request. Its main job is therefore proxying.

Differences in technical principles

Layer 4

So-called layer-4 load balancing determines the final internal server from the destination address and port in the packet, together with the server selection method configured on the load balancing device.

Taking the common case of TCP: when the load balancing device receives the first SYN from the client, it selects the best server in the way described above, modifies the destination IP address in the packet (to the chosen back-end server's IP), and forwards it directly to that server.

The TCP connection, i.e. the three-way handshake, is established directly between the client and the server; the load balancing device merely performs router-like forwarding.

In some deployments, to ensure that the server's response packets can be returned correctly to the load balancing device, the packet's original source address may also be modified during forwarding.

[Figure: layer-4 load balancing packet flow]

Layer 7

So-called layer-7 load balancing, also known as "content switching", determines the final internal server from the genuinely meaningful application layer content in the message, together with the server selection method configured on the load balancing device.

Taking TCP again: if the load balancing device is to choose a server based on real application layer content, it must first stand in for the final server and complete the three-way handshake with the client. Only then can it receive the application layer payload the client sends, and, based on specific fields in that payload together with the configured server selection method, determine the final internal server.

In this case, the load balancing device behaves more like a proxy server, establishing separate TCP connections with the front-end client and the back-end server.

From this technical principle alone, layer-7 load balancing clearly places higher demands on the load balancing device, and its processing capacity is necessarily lower than that of a layer-4 deployment.

So why is layer-7 load balancing needed?

Requirements of application scenarios

The advantage of layer-7 load balancing is that it makes the whole network more "intelligent".

You can get a basic sense of the advantages of this approach from our earlier introduction to optimization for HTTP applications.

For example, for traffic accessing a website, layer-7 rules can forward requests for images to a dedicated image server and apply caching, while requests for text can be forwarded to a dedicated text server and compressed.

Of course, this is only a small layer-7 use case. In principle, this approach can modify the client's request and the server's response in any meaningful way, greatly improving the application system's flexibility at the network layer.

Many functions normally deployed in the back end (for example in Nginx or Apache) can be moved forward onto the load balancing device, such as header rewriting of client requests, or keyword filtering and content insertion in server responses.

Another frequently mentioned feature is security.

In the most common network attack, the SYN flood, hackers control many source clients that use spoofed IP addresses to send SYN packets at the same target. The attack typically sends a large volume of SYN packets, exhausting the related resources on the server to achieve denial of service (DoS).

As the technical principles above show, in layer-4 mode these SYN packets are forwarded to the back-end servers, whereas in layer-7 mode they naturally terminate on the load balancing device and never affect the back-end servers. In addition, the load balancing device can apply various layer-7 policies to filter specific traffic, such as application-level attacks like SQL injection, further improving overall system security at the application level.

Current layer-7 load balancing focuses mainly on the widely used HTTP protocol, so it mostly applies to B/S (browser/server) systems such as websites and internal information platforms.

Layer-4 load balancing covers other TCP applications, such as ERP systems built on a C/S (client/server) architecture.

Issues to consider with layer-7 load balancing.

Is it really necessary? Layer-7 load balancing can indeed make traffic handling more intelligent, but it inevitably brings more complex device configuration, greater load on the balancer, and harder troubleshooting. System designs should consider hybrid deployments that use layer 4 and layer 7 together.

Does it really improve security? Take the SYN flood: layer-7 mode does shield the servers from this traffic, but the load balancing device itself must have strong anti-DDoS capability; otherwise, even with healthy servers, a failure of the load balancer as the central scheduler brings down the whole application.

Is it flexible enough? The advantage of layer-7 load balancing is that it can make the traffic of the whole application intelligent, but the device must offer comprehensive layer-7 functionality to support application-based scheduling across different customer scenarios. The simplest test is whether it can replace the scheduling functions of back-end servers such as Nginx or Apache. Only a load balancing device that provides a layer-7 application development interface, letting customers configure functionality to their own needs, can truly deliver strong flexibility and intelligence.

How to choose

How does the load balancer choose which back-end server to forward to?

Determining factors

A load balancer generally decides which server to forward a request to based on two factors.

First, it ensures that the chosen server can actually respond to the request; then it selects from the healthy pool according to pre-configured rules.

Because the load balancer should only select back-end servers that respond normally, it needs a way to judge whether a back-end server is "healthy".

To monitor the health of back-end servers, a health check service periodically tries to connect to them using the protocol and port defined by the forwarding rules.

If a server fails the health check, it is removed from the pool so that no traffic is forwarded to it until it passes the check again.
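A minimal TCP-level health check can be sketched as follows: a server counts as healthy if a TCP connection to its service port succeeds within a timeout. The `HealthCheck` class name and the timeout value are illustrative; real load balancers usually also support HTTP-level checks.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class HealthCheck {
    // A server is "healthy" if a TCP connection to host:port
    // succeeds within timeoutMillis.
    static boolean isHealthy(String host, int port, int timeoutMillis) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMillis);
            return true;
        } catch (IOException e) {
            return false; // refused, unreachable, or timed out
        }
    }
}
```

A scheduler would call this for every server in the pool on a fixed interval, removing servers that fail and re-adding them once they pass again.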

Load balancing algorithm

The load balancing algorithm determines which of the healthy back-end servers will be selected.

1. Random algorithm

Random: selection probability is set according to server weight.

Over a short window the distribution can be uneven, but the larger the call volume, the more uniform it becomes. Applying weights probabilistically also keeps the distribution relatively even and makes it easy to adjust a provider's weight dynamically.
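Weighted random selection can be sketched like this (class and method names are illustrative): a random point falls on a line of length equal to the total weight, and the server whose segment contains the point is chosen.

```java
import java.util.List;
import java.util.Random;

public class WeightedRandom {
    static class Server {
        final String name;
        final int weight;
        Server(String name, int weight) { this.name = name; this.weight = weight; }
    }

    // A server with weight w is picked with probability w / (total weight).
    static Server select(List<Server> servers, Random random) {
        int total = 0;
        for (Server s : servers) total += s.weight;
        int point = random.nextInt(total); // uniform in [0, total)
        for (Server s : servers) {
            point -= s.weight;             // walk the cumulative weight line
            if (point < 0) return s;       // point landed in this segment
        }
        throw new IllegalStateException("unreachable");
    }
}
```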

2. Round robin and weighted round robin

Round robin: this algorithm works best when every server in the cluster has the same processing capacity and every request costs roughly the same to handle.

Round robin rotates through servers in a ratio agreed in advance by weight. It has the problem of requests accumulating on a slow provider: for example, if the second machine is very slow but has not crashed, requests routed to it get stuck there, and over time more and more requests pile up on that machine.

Weighted round robin is an algorithm that adds a weight to each server in the rotation.

For example, if server 1 has weight 1, server 2 has weight 2, and server 3 has weight 3, the order is 1-2-2-3-3-3-1-2-2-3-3-3…
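The 1-2-2-3-3-3 sequence can be produced by a naive weighted round robin sketch, shown below with illustrative names. Expanding each server into the cycle weight-many times is simple and works for small integer weights; production implementations such as Nginx use a "smooth" variant that interleaves servers more evenly.

```java
import java.util.ArrayList;
import java.util.List;

public class WeightedRoundRobin {
    private final List<String> cycle = new ArrayList<>();
    private int index = 0;

    // Each server appears in the cycle as many times as its weight,
    // e.g. weights 1, 2, 3 give the cycle 1-2-2-3-3-3.
    WeightedRoundRobin(List<String> names, List<Integer> weights) {
        for (int i = 0; i < names.size(); i++) {
            for (int j = 0; j < weights.get(i); j++) {
                cycle.add(names.get(i));
            }
        }
    }

    // Return the next server, wrapping around at the end of the cycle.
    synchronized String next() {
        String server = cycle.get(index);
        index = (index + 1) % cycle.size();
        return server;
    }
}
```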

3. Minimum connection and weighted minimum connection

(1) Least connections: route each request to the server currently handling the fewest connections (sessions). Even when servers differ in processing capacity and requests differ in cost, this reduces server load to some extent.

(2) Weighted least connections adds a weight to each server in the least connections algorithm. The number of connections each server should handle is assigned in advance, and client requests are forwarded to the server with the fewest connections relative to its weight, so allocation tracks the actual load on the system.
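The unweighted variant can be sketched as follows, assuming callers report when connections open and close (the class and method names are illustrative):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class LeastConnections {
    // Active connection count per server.
    private final Map<String, AtomicInteger> active = new ConcurrentHashMap<>();

    void register(String server) {
        active.putIfAbsent(server, new AtomicInteger(0));
    }

    // Pick the server with the fewest active connections and count
    // the new connection against it.
    String acquire() {
        String best = null;
        int min = Integer.MAX_VALUE;
        for (Map.Entry<String, AtomicInteger> e : active.entrySet()) {
            int n = e.getValue().get();
            if (n < min) { min = n; best = e.getKey(); }
        }
        active.get(best).incrementAndGet(); // connection opened
        return best;
    }

    void release(String server) {
        active.get(server).decrementAndGet(); // connection closed
    }
}
```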

4. Hash algorithm

  • Normal hash
  • Consistent hashing: requests with the same parameters are always sent to the same provider.

When a provider goes down, the requests originally sent to it are spread across the other providers via virtual nodes, without drastic redistribution.
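A minimal sketch of such a ring uses virtual nodes on a sorted map: a request key maps to the first node clockwise on the ring, so removing a node only remaps the keys that hashed to that node's positions. The class name and the FNV-style hash are illustrative; later articles in this series cover the algorithm in depth.

```java
import java.util.SortedMap;
import java.util.TreeMap;

public class ConsistentHashRing {
    private final TreeMap<Integer, String> ring = new TreeMap<>();
    private final int virtualNodes;

    ConsistentHashRing(int virtualNodes) {
        this.virtualNodes = virtualNodes;
    }

    // Place `virtualNodes` points on the ring for this node.
    void addNode(String node) {
        for (int i = 0; i < virtualNodes; i++) {
            ring.put(hash(node + "#" + i), node);
        }
    }

    void removeNode(String node) {
        for (int i = 0; i < virtualNodes; i++) {
            ring.remove(hash(node + "#" + i));
        }
    }

    // The key is served by the first node at or after its hash,
    // wrapping around to the ring's first entry.
    String route(String key) {
        SortedMap<Integer, String> tail = ring.tailMap(hash(key));
        return tail.isEmpty() ? ring.firstEntry().getValue()
                              : tail.get(tail.firstKey());
    }

    // FNV-1a, a cheap stand-in; production code would use a stronger hash.
    private static int hash(String s) {
        int h = 0x811c9dc5;
        for (char c : s.toCharArray()) {
            h = (h ^ c) * 0x01000193;
        }
        return h & 0x7fffffff; // keep non-negative for the ring
    }
}
```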

5. IP address hash

An algorithm that forwards packets from the same source (or to the same destination) to the same server by hashing the source and destination IP addresses.

When a client must complete a series of operations and communicate repeatedly with one server, this algorithm uses the flow (session) as the unit, ensuring that traffic from the same client is always handled by the same server.
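The idea reduces to one modular hash, sketched here with illustrative names; `String.hashCode` stands in for whatever hash a real device uses, and the affinity only holds while the server list is unchanged (which is exactly the weakness consistent hashing addresses).

```java
public class IpHash {
    // The same client IP always maps to the same backend index,
    // preserving session affinity for a fixed server list.
    static String select(String clientIp, String[] servers) {
        int h = clientIp.hashCode();
        int index = Math.floorMod(h, servers.length); // non-negative index
        return servers[index];
    }
}
```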

6. URL hash

An algorithm that forwards requests for the same URL to the same server by hashing the URL in the client request.

Solve single point of failure

Finally, to solve the load balancer's own single point of failure, a second load balancer can be combined with the first to form a cluster.

PS: load balancing itself should not become a single point bottleneck.

[Figure: two load balancers deployed as an active/passive pair]

When the primary load balancer fails, the user request needs to be transferred to the second load balancer.

Floating IP

Because DNS changes usually take a long time to take effect, a flexible way to remap IP addresses is needed, such as a floating IP.

In this way, the domain name can remain associated with the same IP, and the IP itself can move between servers.

Implementations of load balancing (DNS > data link layer > IP layer > HTTP layer)

DNS domain name resolution load balancing (latency)

[Figure: DNS round-robin load balancing]

Letting the DNS server perform load balancing while it handles domain name resolution requests is another common scheme.

Configure multiple A records on the DNS server, for example: www.mysite.com IN A 114.100.80.1, www.mysite.com IN A 114.100.80.2, www.mysite.com IN A 114.100.80.3.

Each resolution request then returns a different IP address computed by the load balancing algorithm, so the multiple servers configured in the A records form a cluster and achieve load balancing.

The advantage of DNS-based load balancing is that it hands the load balancing work to DNS, sparing the site operational effort; the disadvantage is that resolvers may cache the A records, which is outside the website's control.

In practice, large websites always use DNS resolution as the first level of load balancing and then perform a second level internally.

Data link layer load balancing (LVS)

[Figure: data link layer (direct routing) load balancing]

Data link layer load balancing modifies the MAC address at the data link layer of the communication protocol.

This transmission mode is also called triangle transmission mode. During load balancing data distribution, only the destination MAC address is modified; the IP address is left unchanged. All machines in the real physical server cluster are configured with a virtual IP identical to the load balancing server's IP address, which is how the load balancing works. This mode is also called direct routing (DR).

In the figure above, after a user request reaches the load balancing server, the load balancer rewrites the packet's destination MAC address to that of a real web server without modifying the destination IP, so the packet reaches the target web server normally. After processing, the server sends the response directly to the user's browser through the gateway server, bypassing the load balancing server.

Link layer load balancing with triangle transmission is the most widely used load balancing method at large websites.

The best link layer load balancing open source product on Linux platform is LVS (Linux Virtual Server).

IP load balancing (SNAT)

[Figure: IP (network layer) load balancing]

IP load balancing: load balancing performed at the network layer by modifying the request's destination address.

After a user's request packet reaches the load balancing server, the load balancer picks the packet up in the operating system kernel, computes a real web server address according to the load balancing algorithm, and rewrites the packet's destination IP to that address; no user-space process is involved.

After the real web server finishes processing, its response packet returns to the load balancing server, which rewrites the packet's source address to its own IP and sends it on to the user's browser.

The key question is how the real web server's response packets get back to the load balancing server. One approach is for the load balancer to rewrite the source address as well as the destination IP, setting the packet's source address to its own IP, i.e. source network address translation (SNAT). The other is to make the load balancing server also the gateway of the real physical servers, so that all response traffic passes through it.

IP load balancing completes data distribution in the kernel process, which has better processing performance than reverse proxy balancing.

However, since all request and response packets must pass through the load balancing server, the load balancer's network card bandwidth becomes the bottleneck of the system.

HTTP redirect load balancing (rare)

[Figure: HTTP redirect load balancing]

An HTTP redirect server is an ordinary application server whose only function is to compute a real server address from the user's HTTP request, write that address into an HTTP redirect response (status 302), and return it to the browser; the browser then automatically requests the real server.

The advantage of this scheme is its simplicity. The disadvantage is that the browser needs two round trips for every access, so performance is poor.

Redirecting with the HTTP 302 response code may also be judged by search engines as SEO cheating, lowering the site's search ranking.

And the redirect server's own processing capacity may become a bottleneck. This scheme is therefore rarely used in practice.
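The behaviour can be sketched with the JDK's built-in `com.sun.net.httpserver` server. The `RedirectBalancer` class name and the `10.0.0.x` back-end addresses are placeholders; the balancer simply answers every request with a 302 pointing at a real server chosen round-robin.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.net.InetSocketAddress;
import java.util.concurrent.atomic.AtomicInteger;

public class RedirectBalancer {
    // Placeholder back-end addresses; in reality these are real web servers.
    static final String[] SERVERS = {"http://10.0.0.1", "http://10.0.0.2"};
    static final AtomicInteger counter = new AtomicInteger();

    static HttpServer start(int port) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/", exchange -> {
            // Pick the next real server round-robin and redirect the client.
            int i = counter.getAndIncrement() % SERVERS.length;
            String target = SERVERS[i] + exchange.getRequestURI();
            exchange.getResponseHeaders().add("Location", target);
            exchange.sendResponseHeaders(302, -1); // 302 with empty body
            exchange.close();
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws IOException {
        start(8080); // every request now gets a 302 to a real server
    }
}
```

Note how the two-round-trip cost is visible in the design: the balancer never proxies any payload, it only hands back a Location header, and the browser must then contact the real server itself.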

Reverse proxy load balancing (nginx)

[Figure: forward proxy vs. reverse proxy placement]

A traditional (forward) proxy sits on the browser's side and sends HTTP requests to the Internet on the browser's behalf; a reverse proxy sits in the website's machine room and receives HTTP requests on behalf of the web servers.

One role of a reverse proxy is to protect website security: all Internet requests must pass through the proxy server, which puts a barrier between the web servers and potential network attacks.

A proxy server can also be configured with caching to speed up web requests: when a user first accesses static content, it is cached on the reverse proxy server, so later requests for the same content can be answered directly from the proxy, accelerating response times and reducing load on the web servers.

In addition, the reverse proxy server can also realize the function of load balancing.

[Figure: reverse proxy load balancing]

Because the reverse proxy server forwards requests at the HTTP protocol level, it is also called application layer load balancing.

The advantage is simple deployment, but the disadvantage is that it may become the bottleneck of the system.

PS: it need not become a bottleneck, since the proxy layer itself can be made into a cluster.

Expand reading

Common implementations of load balancing: DNS, LVS, Nginx

F5 hardware load balancing

Nginx software reverse proxy load balancing

HAProxy software load balancer

High availability + high concurrency + load balancing architecture

Summary

This section mainly describes why load balancing is needed and the common strategies of load balancing.

We will continue learning in later posts.


reference material

Capacity design and load balancing

What is the difference between reverse proxy and load balancing?

What is load balancing?

Load balancing Basics

load balancing