Comparison of three mainstream software load balancers (LVS vs nginx vs haproxy)

Time: 2021-12-15

LVS

1. Strong load capacity and high performance, reaching up to roughly 60% of F5 hardware; memory and CPU consumption are relatively low.
2. Works at layer 4 of the network stack and only distributes requests; the actual forwarding is done by the Linux kernel (the IPVS module), so LVS itself generates almost no traffic.
3. Good stability and reliability, with a mature hot-standby scheme (e.g. LVS + Keepalived, which uses VRRP for failover).
4. Wide range of application; it can load balance almost any kind of service.
5. Regular-expression processing is not supported, so dynamic/static separation cannot be done.
6. Supported load balancing algorithms: RR (round robin), WRR (weighted round robin), LC (least connections), WLC (weighted least connections).
7. Configuration is relatively complex, and it depends heavily on the stability of the network.

nginx

1. Works at layer 7 of the network stack, so traffic can be split based on HTTP attributes such as domain name and directory structure.
2. Nginx has little dependence on the network; in theory, if it can ping the back end, it can load balance it.
3. Installation and configuration are relatively simple, and it is easy to test.
4. It can also withstand high load and is stable, generally supporting more than 10,000 concurrent connections.
5. Health checks of back-end servers are only supported via port detection, not via URL.
6. Nginx handles requests asynchronously, which helps reduce the load on the node servers.
7. Nginx only supports HTTP, HTTPS and email protocols, so its scope of application is narrower.
8. Session persistence is not supported directly, but ip_hash can be used to work around it; support for large request headers is not very good.
9. Supported load balancing algorithms: round robin, weighted round robin and ip_hash.
10. Nginx can also act as a web server and as a cache.

haproxy

1. Supports two proxy modes, TCP (layer 4) and HTTP (layer 7), as well as virtual hosts.
2. It makes up for some of nginx's shortcomings, such as session persistence and cookie-based routing (see the sketch after this list).
3. Supports URL-based health checks, which helps detect problems on back-end servers.
4. More load balancing strategies are implemented, such as dynamic round robin, weighted source hash, weighted URL hash and weighted parameter hash.
5. In terms of efficiency, haproxy balances load faster than nginx.
6. Haproxy can load balance MySQL, performing health checks on back-end DB nodes and distributing connections among them.
7. Supported load balancing algorithms: round robin, weighted round robin, source, URI and RDP-cookie.
8. It cannot act as a web server or cache.
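
To make items 2 and 3 above concrete, here is a minimal sketch of a haproxy backend that inserts a SERVERID cookie for session persistence and health-checks servers by URL. The backend name, check path and server addresses are assumptions for illustration, not taken from the article:

backend app_servers
    balance     roundrobin
    # insert a SERVERID cookie so each client keeps hitting the same back-end server
    cookie      SERVERID insert indirect nocache
    # health check via an HTTP URL instead of a bare TCP port probe
    option      httpchk GET /health
    server      app01 192.168.1.11:80 check cookie app01
    server      app02 192.168.1.12:80 check cookie app02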

Business scenarios for the three mainstream software load balancers

1. At the initial stage of building a website, nginx or haproxy can be chosen as the reverse-proxy load balancer (or load balancing can be skipped altogether if traffic is small), because the configuration is simple and the performance is sufficient for general business scenarios. If the load balancer itself being a single point of failure is a concern, nginx + Keepalived or haproxy + Keepalived can be used to avoid it.
2. After the website has grown to a certain scale, LVS can be used to improve stability and forwarding efficiency; after all, LVS is more stable and forwards more efficiently than nginx or haproxy. However, maintaining LVS places higher demands on the operations staff and requires a larger investment.

Note on nginx versus haproxy: nginx supports layer 7, has the largest user base, and is reliably stable; haproxy supports both layer 4 and layer 7, supports more load balancing algorithms, supports session persistence, and so on. The specific choice depends on the usage scenario. The number of haproxy users is also growing, because it makes up for some of nginx's shortcomings.

Several important factors for measuring the quality of a load balancer

1. Session rate: the number of requests processed per unit time
2. Session concurrency: the ability to handle concurrent connections
3. Data rate: the ability to process data (throughput)
According to official test statistics, haproxy can process up to 20,000 requests per unit time, maintain 40,000-50,000 concurrent connections at the same time, and handle up to 10 Gbps of data. Based on the above, haproxy is a load balancer and reverse proxy with excellent performance.

Summary of the main advantages of haproxy

1. It is free, open source, and very stable. I have seen this in some small projects I have done: a single haproxy instance runs well, and its stability is comparable to that of LVS.

2. According to the official documentation ("New benchmark of HAProxy at 10 Gbps using Myricom's 10GbE NICs (Myri-10G PCI-Express)"), haproxy can saturate a 10 Gbps link, which is amazing for software-level load balancing.

3. Haproxy can load balance MySQL, mail and other non-web services; we often use it for MySQL (read) load balancing (a configuration sketch follows this list).

4. It comes with a powerful page for monitoring server status. In our environment we combine it with Nagios for email or SMS alerts, which is one of the reasons I like it so much.
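
A minimal sketch of points 3 and 4 above: a TCP-mode listener that load balances MySQL using mysql-check, plus a listener exposing haproxy's built-in status page. The ports, addresses, check user and stats URI/credentials are assumptions for illustration only:

listen mysql_cluster
    bind        0.0.0.0:3306
    mode        tcp
    balance     leastconn
    # mysql-check logs in with the given (assumed) MySQL user to verify each node is alive
    option      mysql-check user haproxy_check
    server      db01 192.168.1.21:3306 check
    server      db02 192.168.1.22:3306 check

listen stats_page
    bind        0.0.0.0:8080
    mode        http
    stats       enable
    stats       uri /haproxy-status      # assumed URI for the monitoring page
    stats       auth admin:admin         # assumed credentials; change them in production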

The haproxy configuration file is explained in detail below:

vim /etc/haproxy/haproxy.cfg
global                                      # global parameter settings
    log         127.0.0.1 local2            # log syntax: log <address> <facility> [max_level]; send logs to the
                                            # local2 facility of the syslog service on 127.0.0.1 at level info
    chroot      /var/lib/haproxy            # change the current working directory (chroot jail)
    pidfile     /var/run/haproxy.pid        # file holding the current process ID
    maxconn     4000                        # maximum number of connections
    user        haproxy                     # user the process runs as
    group       haproxy                     # group the process runs as
    daemon                                  # run haproxy as a daemon
    stats socket /var/lib/haproxy/stats     # local admin/stats socket
defaults
    mode                    http            # default mode: mode {tcp|http|health}; tcp is layer 4, http is
                                            # layer 7, and health only returns "OK"
    log                     global          # apply the global log configuration
    option                  httplog         # enable logging of HTTP requests; by default haproxy does not
                                            # log HTTP requests
    option                  dontlognull     # if enabled, "empty" connections are not logged. An empty
                                            # connection is one made by an upstream load balancer or a
                                            # monitoring system that periodically connects to a fixed
                                            # component or page, or probes whether a port is listening,
                                            # just to check that the service is alive. The official
                                            # documentation recommends not using this parameter when there
                                            # is no other load balancer upstream of the service, because
                                            # malicious scans from the Internet would then go unlogged
    option http-server-close                # actively close the HTTP connection after each request completes
    option forwardfor       except 127.0.0.0/8
                                            # if the application on the server needs to record the IP of the
                                            # client that initiated the request, enable this option: haproxy
                                            # adds an "X-Forwarded-For" header carrying the client IP to the
                                            # request sent to the back-end server, so the back end can obtain
                                            # the client's real IP
    option redispatch                       # when cookies are used, haproxy inserts the serverid of the
                                            # back-end server into the cookie to keep session persistence; if
                                            # that back-end server goes down, the client's cookie is not
                                            # refreshed. With this parameter set, the request is forcibly
                                            # redirected to another back-end server so the service keeps working
    retries                 3               # number of failed connection attempts to a back-end server before
                                            # it is marked as unavailable
    timeout http-request    10s             # HTTP request timeout
    timeout queue           1m              # how long a request may wait in the queue
    timeout connect         10s             # connection timeout
    timeout client          1m              # client-side timeout
    timeout server          1m              # server-side timeout
    timeout http-keep-alive 10s             # HTTP keep-alive timeout
    timeout check           10s             # health-check timeout
    maxconn                 3000            # maximum number of connections per process
frontend main *:80                          # listen on address *:80
    acl url_static       path_beg       -i /static /images /javascript /stylesheets
    acl url_static       path_end       -i .jpg .gif .png .css .js
    use_backend static          if url_static
    default_backend      my_webserver       # requests matching no ACL are forwarded to the backend named
                                            # my_webserver
backend static                              # static/dynamic separation: requests whose path matches the
                                            # static-file ACLs above (.jpg .gif .png .css .js) go to this backend
    balance     roundrobin                  # load balancing algorithm (balance roundrobin = round robin;
                                            # balance source keeps the session; static-rr, leastconn, first,
                                            # uri and other parameters are also supported)
    server      static 127.0.0.1:80 check   # static files can be served locally (or from another machine or
                                            # a squid cache server)
backend my_webserver                        # back-end section named my_webserver; the name itself is arbitrary
                                            # but must match the default_backend value in the frontend
    balance     roundrobin                  # load balancing algorithm
    server      web01 172.31.2.33:80 check inter 2000 fall 3 weight 30   # define multiple back-end servers
    server      web02 172.31.2.34:80 check inter 2000 fall 3 weight 30
    server      web03 172.31.2.35:80 check inter 2000 fall 3 weight 30
