In the last chapter, we built the simplest single-service project. In a monolithic architecture, all functionality lives in one project. But as traffic grows, a single deployed environment can no longer keep up. What can we do?
Think of a supermarket: when there are few customers, one checkout lane is enough. As more customers arrive, a single lane means long queues, and the simplest fix is to open more lanes.
There are similar problems in software architecture:
If a project has few users, few visits, and little data, one server is enough: deploy the project once and let everyone access it directly. But as users, data, and concurrency grow, a single server can no longer carry the business, and multiple machines are needed, that is, a high-performance cluster.
But once we deploy multiple servers (say, two), how do callers reach them, and how is the traffic spread between them? This is where load balancing comes in.
Load balancing forwards users' requests to back-end servers according to some strategy, spreading the load evenly. It improves both the service capacity and the availability of the system.
1. Classification of load balancing
1.1 Classification by type
1. DNS load balancing
The general principle: when a user accesses a domain name, DNS must first resolve it to an IP address. During resolution, the DNS server can return different IP addresses, for example based on the user's geographic location, thereby balancing the load and speeding up access.
2. Software load balancing
Traffic is distributed in software, either at the transport layer (e.g. LVS) or at the application layer (e.g. Nginx). Software load balancing is very easy to adopt: it only needs to be installed and configured on a server.
3. Hardware load balancing
A dedicated network device performs the balancing, such as F5 Networks' BIG-IP. These devices offer very high performance but are very expensive.
1.2 Classification by where the balancing happens
1. Server load balancing
The caller only accesses the load balancer's IP and does not care how many servers sit behind it.
Server-side load balancing is like calling a customer service hotline: many people dial the same number at the same time, without knowing how many agents there are or who will answer.
2. Client load balancing
Multiple servers are deployed, and the client knows the address of each one, spreading its requests across them according to some routing rule, e.g. Spring Cloud Ribbon. Client-side load balancing usually works together with service registration and discovery.
Client-side load balancing is more like checking out at the supermarket: we can see all the checkout lanes and choose one ourselves.
We will cover Ribbon, the client-side load balancing component in Spring Cloud, in detail in later chapters.
1.3 Classification by network layer
The most commonly referenced network model, the OSI model, has seven layers:
1. Layer 2 load balancing
Load balancing at the data link layer. The load balancer and the application servers are bound to the same virtual IP. Clients send requests to this virtual IP; after receiving a request, the load balancer forwards it by rewriting the destination MAC address.
2. Layer 3 load balancing
Load balancing at the network layer. A virtual IP is also used, but after receiving a request, the load balancer forwards it by rewriting the destination to a server's actual IP address.
3. Layer 4 load balancing
Load balancing based on IP + port. The balancer accepts requests on an IP and port and forwards them to a back-end application server.
For example, with TCP, when the load balancer receives the first SYN packet (the connection request), it picks server A via the load balancing algorithm, rewrites the destination IP in the packet to server A's IP, and forwards it on.
It is like going to a bank: you first take a number (the connection request), the bank's queuing system announces "number 101, please go to window 3" (forwarding your request to server 3), and the teller at window 3 actually handles your business.
In this process, the load balancer acts much like a router.
4. Layer 7 load balancing
Load balancing based on the virtual URL or host name. Layer 7 is the application layer, which supports many application protocols such as HTTP and FTP. A layer-7 load balancer can inspect the actually meaningful content of the request message and, combined with the load balancing algorithm, decide which application server to forward it to. For this reason a layer-7 load balancer is also called a "content switch".
For example, a layer-7 load balancer can forward image requests to an image server and text/content requests to an application server.
It is like the bank again: when you take a number, the lobby manager looks at your bank card; an ordinary card gets B101, a gold card gets A101, and a wealth management customer is taken straight to the wealth management window. In other words, requests are forwarded to different servers depending on your specific business or card type (the message content).
In this process, the load balancer acts much like a proxy server.
2. Common load balancing tools
LVS: layer-4 load balancing. LVS is a high-performance, highly available load balancer built on Linux kernel clustering. Its performance is strong, and it has a mature active-standby failover scheme (e.g. LVS + Keepalived). Because layer-4 load balancing only distributes requests, LVS's I/O performance is not affected by the traffic content.
Nginx: layer-7 load balancing. Working at layer 7, it can apply routing policies to HTTP traffic, and its regex-based rules are powerful and flexible. Nginx is easy to install, configure, and test, and it can sustain high load, though somewhat less than LVS. Nginx can also health-check the back-end application servers and, based on response status codes or timeouts, resubmit failed requests to another node.
Nginx supports HTTP, HTTPS, SMTP, POP3, IMAP, and other protocols, and newer versions also support TCP load balancing.
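As an illustration of how a software layer-7 balancer is wired up, here is a minimal Nginx configuration sketch. The upstream pool name `app_servers` and the IP addresses are hypothetical, chosen only for the example:

```nginx
# Hypothetical upstream pool: two application servers, round-robin by default.
upstream app_servers {
    server 192.168.0.11:8080 weight=2;  # higher weight -> proportionally more requests
    server 192.168.0.12:8080;
}

server {
    listen 80;

    location / {
        proxy_pass http://app_servers;  # forward each request to the pool
    }
}
```

With this in place, clients talk only to the Nginx address, never to the individual application servers, which matches the server-side load balancing model described above.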
HAProxy: layer-7 load balancing. In terms of performance, HAProxy is faster than Nginx and handles concurrency better. HAProxy also supports TCP, so it can, for example, load balance MySQL read operations.
3. Common load balancing scheduling algorithms
3.1 Round-robin
Round-robin distributes requests to each server in turn, in order.
Round-robin is simple, efficient, and easy to scale horizontally, but it only aims for an even split and ignores each server's actual load. If one server performs poorly, it can become the short plank in the barrel that drags the whole cluster down.
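The idea can be sketched in a few lines of Java. This is a minimal illustration, not a production implementation; the class and method names are our own:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal round-robin selector: requests go to each server in turn.
class RoundRobin {
    private final List<String> servers;
    private final AtomicInteger counter = new AtomicInteger(0);

    RoundRobin(List<String> servers) {
        this.servers = servers;
    }

    String next() {
        // floorMod keeps the index non-negative even if the counter overflows.
        int idx = Math.floorMod(counter.getAndIncrement(), servers.size());
        return servers.get(idx);
    }
}
```

With servers A, B, C, successive calls to `next()` return A, B, C, A, B, C, ... regardless of how busy each server actually is.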
3.2 Random
Requests are assigned to the servers at random. With enough requests, the distribution approaches an even split, by the law of large numbers.
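A random selector is even simpler than round-robin; again, a hedged sketch with invented names:

```java
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

// Minimal random selector: each request picks a server uniformly at random.
class RandomBalancer {
    private final List<String> servers;

    RandomBalancer(List<String> servers) {
        this.servers = servers;
    }

    String next() {
        // nextInt(n) returns a uniform value in [0, n), so every server is equally likely.
        return servers.get(ThreadLocalRandom.current().nextInt(servers.size()));
    }
}
```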
3.3 Random round-robin
A combination of the random and round-robin methods: pick a random server as the starting point, then continue in round-robin order from there. (Randomness applies only to the choice of the first server; the rest works exactly like plain round-robin.)
3.4 Source-address hashing
Hash the client's IP address to get a value x; with n servers, x % n determines which server the client reaches.
Source-address hashing makes the same IP land on the same server every time, so session sharing does not need to be considered. However, it can distribute traffic unevenly, and when a server fails, the clients mapped to it can no longer be served, so the cluster's high availability is not guaranteed.
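The x % n rule above translates directly into code. A minimal sketch (class and method names are our own):

```java
import java.util.List;

// Source-address hashing: hash the client IP, take it modulo the server count.
class IpHash {
    static String select(String clientIp, List<String> servers) {
        // floorMod guards against a negative hashCode.
        int idx = Math.floorMod(clientIp.hashCode(), servers.size());
        return servers.get(idx);
    }
}
```

Note the downside mentioned above: if `servers.size()` changes (a server is added or fails), almost every client is remapped to a different server.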
3.5 Weighted round-robin
Weighted round-robin improves on plain round-robin. Servers differ in configuration and thus in how much load they can take, so higher-spec machines are given higher weights and handle more requests.
Because it accounts for machine performance, weighted round-robin can get the most out of the cluster.
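One naive way to implement weights is to expand each server into as many "slots" as its weight and round-robin over the slots. This is a sketch for illustration; real balancers use smoother schemes (e.g. Nginx's smooth weighted round-robin):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Naive weighted round-robin: a server with weight w occupies w slots in the cycle.
class WeightedRoundRobin {
    private final List<String> slots = new ArrayList<>();
    private int counter = 0;

    WeightedRoundRobin(Map<String, Integer> weights) {
        weights.forEach((server, weight) -> {
            for (int i = 0; i < weight; i++) slots.add(server);
        });
    }

    synchronized String next() {
        return slots.get(Math.floorMod(counter++, slots.size()));
    }
}
```

With weights {big: 3, small: 1}, every cycle of four requests sends three to "big" and one to "small".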
3.6 Weighted random
Similar to weighted round-robin, but with random selection; we won't repeat the details here.
3.7 Least connections
Based on the number of connections on each server node, the balancer dynamically picks the server with the fewest current connections to handle the request.
Least connections adapts to real-time state changes, makes the most of each machine's resources, and improves the cluster's overall availability. The trade-off is higher complexity: the number of connections on every server must be tracked.
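The bookkeeping that makes this algorithm "dynamic" can be sketched as follows. This is a simplified, single-process illustration (names are our own); a real balancer would track connections across the network:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Least-connections: track in-flight requests per server, pick the least busy one.
class LeastConnections {
    private final Map<String, Integer> active = new LinkedHashMap<>();

    LeastConnections(List<String> servers) {
        servers.forEach(s -> active.put(s, 0));
    }

    // Choose the server with the fewest active connections and count the new one.
    synchronized String acquire() {
        String best = null;
        for (Map.Entry<String, Integer> e : active.entrySet()) {
            if (best == null || e.getValue() < active.get(best)) best = e.getKey();
        }
        active.put(best, active.get(best) + 1);
        return best;
    }

    // Call when the request finishes so the count goes back down.
    synchronized void release(String server) {
        active.put(server, active.get(server) - 1);
    }
}
```

The need for an explicit `release()` call is exactly the extra complexity the text mentions: the balancer must know when each connection ends, not just when it starts.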
3.8 Fastest response
Based on each server node's response time (the request's round-trip delay), the balancer dynamically picks the fastest-responding server to handle the request.
Like least connections, fastest response adjusts dynamically, with even finer-grained control. Again the complexity is higher: the response time of every server must be measured.
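A sketch of the bookkeeping, with invented names: record each server's latest measured response time and pick the minimum. (A real implementation would smooth the measurements, e.g. with a moving average, rather than trust a single sample.)

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Fastest-response: remember each server's last response time and pick the quickest.
class FastestResponse {
    private final Map<String, Long> lastResponseMillis = new LinkedHashMap<>();

    FastestResponse(List<String> servers) {
        servers.forEach(s -> lastResponseMillis.put(s, 0L));  // start everyone equal
    }

    // Report a measured round-trip time for a server.
    synchronized void record(String server, long millis) {
        lastResponseMillis.put(server, millis);
    }

    // Pick the server with the smallest recorded response time.
    synchronized String select() {
        String best = null;
        for (Map.Entry<String, Long> e : lastResponseMillis.entrySet()) {
            if (best == null || e.getValue() < lastResponseMillis.get(best)) best = e.getKey();
        }
        return best;
    }
}
```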
To summarize: when traffic grows and a single environment is no longer enough, we can deploy multiple environments and distribute requests across the servers through load balancing, scaling out horizontally. In architecture terms, this is called cluster deployment.
In this lesson, we learned what load balancing is, how it is classified, the common load balancing tools, and the common scheduling algorithms.
Uncle who knows some code | original
[From Single Architecture to Distributed Architecture] This series of articles aims to explain, in simple and plain language, the problems that arise as an architecture evolves, along with the corresponding solutions and their pros and cons. Intended audience: students who want to get into Java Web development (some Java language foundation required); junior programmers who want to learn the popular middleware and frameworks for Java Web development; and programmers who have long worked with SSH or SSM and want a change.