It’s not difficult to master HAProxy from scratch!!!

Time: 2021-6-5

What is haproxy

HAProxy is free load-balancing software that runs on most mainstream Linux operating systems.

HAProxy provides L4 (TCP) and L7 (HTTP) load balancing with a rich feature set. Its community is very active and releases come quickly (the latest stable version, 1.7.2, was released on January 13, 2017). Most importantly, HAProxy offers performance and stability comparable to commercial load balancers.

Because of this, HAProxy is currently not only the first choice among free load-balancing software, but arguably the only choice.

Core functions of HAProxy

  • Load balancing: in both L4 and L7 modes, supports a rich set of algorithms including RR / static RR / LC / IP hash / URI hash / URL_PARAM hash / HTTP header hash
  • Health check: supports TCP and HTTP health check modes
  • Session persistence: for application clusters without session sharing, session persistence can be achieved through insert cookie / rewrite cookie / prefix cookie, as well as the various hash methods mentioned above
  • SSL: HAProxy can terminate HTTPS, decrypting requests and passing plain HTTP to the back end
  • HTTP request rewriting and redirection
  • Monitoring and Statistics: haproxy provides a web-based statistical information page to show health status and traffic data. Based on this function, users can develop monitoring programs to monitor the status of haproxy

Key features of haproxy

performance

  • HAProxy uses a single-threaded, event-driven, non-blocking model, which reduces context-switching overhead and lets it handle hundreds of requests within 1ms, while each session occupies only a few KB of memory.
  • Numerous fine-grained performance optimizations, such as an O(1) complexity event checker, delayed updates, single buffering and zero-copy forwarding, keep HAProxy's CPU usage very low under moderate load.
  • HAProxy makes heavy use of operating-system features to achieve very high request-processing performance. Typically HAProxy itself accounts for only about 15% of the processing time, with the remaining 85% spent in the system kernel.
  • Eight years ago (2009), the author of HAProxy benchmarked version 1.4: a single HAProxy process handled more than 100,000 requests per second and easily saturated 10Gbps of network bandwidth.

stability

As a program recommended to run in single-process mode, HAProxy is held to a very strict standard of stability. According to its author, HAProxy has not had a single crash-causing bug in 13 years: once successfully started, it will not crash unless the operating system or hardware fails (there may be some exaggeration here).

As mentioned above, HAProxy does most of its work inside the operating-system kernel, so its stability mainly depends on the OS. The author recommends a 2.6 or 3.x Linux kernel, fine-tuning the sysctl parameters, and making sure the host has enough memory. Set up this way, HAProxy can run stably at full load for years.

Personal suggestions:

  • Run HAProxy on a Linux operating system with a 3.x kernel
  • Do not deploy other applications on the host running HAProxy, so that HAProxy has the machine's resources to itself and other applications cannot cause failures of the operating system or host
  • Provide at least one standby machine for HAProxy to handle host hardware failure, power outage and other emergencies (building an active-active HAProxy pair is described later in this article)
  • Recommended sysctl configuration (not a universal one; it should still be tuned for the specific situation, but it works as an initial configuration when using HAProxy for the first time):
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 1024 65023
net.ipv4.tcp_max_syn_backlog = 10240
net.ipv4.tcp_max_tw_buckets = 400000
net.ipv4.tcp_max_orphans = 60000
net.ipv4.tcp_synack_retries = 3
net.core.somaxconn = 10000
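
To apply these settings without rebooting, the standard approach is to add them to /etc/sysctl.conf (or a file under /etc/sysctl.d/) and reload; a quick sketch:

sysctl -p                     # reload /etc/sysctl.conf
sysctl net.core.somaxconn     # spot-check that one value took effect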

Installation and operation of haproxy

Here’s how to install and run the latest stable version of HAProxy (1.7.2) on CentOS 7.

install

Create a user and group for HAProxy; in this example both are named “ha”. Note that if you want HAProxy to listen on ports below 1024, it must be started as root.
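
A minimal sketch of the user setup, assuming the "ha" name used throughout this article:

groupadd ha
useradd -g ha -m ha    # -m creates /home/ha, used as the install prefix below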

Download and unzip

wget http://www.haproxy.org/download/1.7/src/haproxy-1.7.2.tar.gz
tar -xzf haproxy-1.7.2.tar.gz

Compile and install

make PREFIX=/home/ha/haproxy TARGET=linux2628
make install PREFIX=/home/ha/haproxy

PREFIX specifies the installation path, and TARGET is chosen according to the operating system kernel version:

- linux22     for Linux 2.2
- linux24     for Linux 2.4 and above (default)
- linux24e    for Linux 2.4 with support for a working epoll (> 0.21)
- linux26     for Linux 2.6 and above
- linux2628   for Linux 2.6.28, 3.x, and above (enables splice and tproxy)

In this example, the operating system kernel version is 3.10.0, so TARGET is set to linux2628.
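
To confirm which TARGET applies and that the build succeeded, two quick checks (standard commands, shown here for convenience):

uname -r                           # kernel version, e.g. 3.10.0
/home/ha/haproxy/sbin/haproxy -v   # should report version 1.7.2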

Create the HAProxy configuration file

mkdir -p /home/ha/haproxy/conf
vi /home/ha/haproxy/conf/haproxy.cfg

Let’s first create the simplest configuration file:

global  # Global settings
    daemon  # Run in the background as a daemon
    maxconn 256  # At most 256 simultaneous connections
    pidfile /home/ha/haproxy/conf/haproxy.pid  # File holding the haproxy process ID
defaults  # Default parameters
    mode http  # HTTP mode
    timeout connect 5000ms  # Server-side connect timeout: 5s
    timeout client 50000ms  # Client response timeout: 50s
    timeout server 50000ms  # Server response timeout: 50s
frontend http-in  # Frontend service http-in
    bind *:8080  # Listen on port 8080
    default_backend servers  # Forward requests to the backend named "servers"
backend servers  # Backend service group "servers"
    server server1 127.0.0.1:8000 maxconn 32  # The only server in this backend, named server1, at port 8000 on the local machine; haproxy opens at most 32 concurrent connections to it

Note: HAProxy requires the system's ulimit -n to be greater than maxconn * 2 + 18. When setting a large maxconn, remember to check and adjust ulimit -n accordingly.
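
Both constraints are easy to verify up front; haproxy's -c flag checks a configuration file without starting the process. A quick sanity check:

ulimit -n      # must exceed maxconn * 2 + 18
/home/ha/haproxy/sbin/haproxy -c -f /home/ha/haproxy/conf/haproxy.cfg   # validate the configuration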

Registering haproxy as a system service

Create a start/stop script for the HAProxy service in the /etc/init.d directory:

vi /etc/init.d/haproxy
#! /bin/sh
set -e
PATH=/sbin:/bin:/usr/sbin:/usr/bin:/home/ha/haproxy/sbin
PROGDIR=/home/ha/haproxy
PROGNAME=haproxy
DAEMON=$PROGDIR/sbin/$PROGNAME
CONFIG=$PROGDIR/conf/$PROGNAME.cfg
PIDFILE=$PROGDIR/conf/$PROGNAME.pid
DESC="HAProxy daemon"
SCRIPTNAME=/etc/init.d/$PROGNAME
# Gracefully exit if the package has been removed.
test -x $DAEMON || exit 0
start()
{
       echo -e "Starting $DESC: $PROGNAMEn"
       $DAEMON -f $CONFIG
       echo "."
}
stop()
{
       echo -e "Stopping $DESC: $PROGNAMEn"
       haproxy_pid="$(cat $PIDFILE)"
       kill $haproxy_pid
       echo "."
}
restart()
{
       echo -e "Restarting $DESC: $PROGNAMEn"
       $DAEMON -f $CONFIG -p $PIDFILE -sf $(cat $PIDFILE)
       echo "."
}
case "$1" in
 start)
       start
       ;;
 stop)
       stop
       ;;
 restart)
       restart
       ;;
 *)
       echo "Usage: $SCRIPTNAME {start|stop|restart}" >&2
       exit 1
       ;;
esac
exit 0
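
For the service commands below to work, the script must be executable; on CentOS it can also be registered with chkconfig so it starts on boot (assumed setup steps, not shown in the original):

chmod +x /etc/init.d/haproxy
chkconfig --add haproxy
chkconfig haproxy on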

Usage

Start, stop and restart:

service haproxy start
service haproxy stop
service haproxy restart

Add log

HAProxy doesn’t write log files directly, so we use Linux’s rsyslog to have HAProxy output logs.

Modify haproxy.cfg

In the global domain and defaults domain, add:

global
    ...
    log 127.0.0.1 local0 info
    log 127.0.0.1 local1 warning
    ...
defaults
    ...
    log global
    ...

This pushes logs of info level and above to rsyslog's local0 facility, and logs of warning level and above to the local1 facility. By default, all frontends use the log configuration from the global section.

Note: at the info level, HAProxy logs every request it handles, which consumes a lot of disk space. In production, consider raising the log level to notice.

Add an rsyslog configuration for the HAProxy logs

vi /etc/rsyslog.d/haproxy.conf
$ModLoad imudp
$UDPServerRun 514
$FileCreateMode 0644  # Log file permissions
$FileOwner ha  # Owner of the log files
local0.*  /var/log/haproxy.log  # Output file for the local0 facility
local1.*  /var/log/haproxy_warn.log  # Output file for the local1 facility

Modify the startup parameters of rsyslog

vi /etc/sysconfig/rsyslog
# Options for rsyslogd
# Syslogd options are deprecated since rsyslog v3.
# If you want to use them, switch to compatibility mode 2 by "-c 2"
# See rsyslogd(8) for more details
SYSLOGD_OPTIONS="-c 2 -r -m 0"

Restart rsyslog and haproxy

service rsyslog restart
service haproxy restart

At this point, you should be able to see the HAProxy log files in the /var/log directory.
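
If the files do not appear, one way to check that rsyslog's UDP listener is working is to send a test message by hand (an illustrative troubleshooting step; the logger flags are from util-linux):

logger -d -n 127.0.0.1 -P 514 -p local0.info "haproxy log test"   # send a UDP syslog message to local0
tail -n 5 /var/log/haproxy.log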

Log rotation with logrotate

Logs written via rsyslog are not rotated automatically, so we rely on Linux’s logrotate (see the introduction to the logrotate service in Linux) to rotate them.

As root, create the logrotate configuration file for the HAProxy logs:

mkdir /root/logrotate
vi /root/logrotate/haproxy
/var/log/haproxy.log /var/log/haproxy_warn.log {  # The two files to rotate
    daily           # Rotate daily
    rotate 7        # Keep 7 rotations
    create 0644 ha ha   # Permissions, user and group for the new files
    compress        # Compress old logs
    delaycompress   # Delay compression by one rotation
    missingok       # Ignore missing files
    dateext         # Suffix old logs with the date
    sharedscripts   # Run the postrotate script only once for all files
    postrotate      # After rotating, reload rsyslog so it writes to the new files
      /bin/kill -HUP $(/bin/cat /var/run/syslogd.pid 2>/dev/null) &>/dev/null
    endscript
}

Then schedule it in crontab:

0 0 * * * /usr/sbin/logrotate /root/logrotate/haproxy
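
logrotate can be dry-run first to verify the configuration before the cron job ever fires (standard logrotate flags):

/usr/sbin/logrotate -d /root/logrotate/haproxy   # debug/dry run, changes nothing
/usr/sbin/logrotate -f /root/logrotate/haproxy   # force a rotation immediately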

Building an L7 load balancer with HAProxy

Overall plan

In this section, we will use HAProxy to build an L7 load balancer with the following features:

  • Load balancing
  • Session persistence
  • Health checks
  • Forwarding to different back-end clusters according to URI prefix
  • Monitoring page

The structure is as follows:

[Figure: architecture diagram]

There are six back-end services in this architecture, divided into three groups of two:

  • ms1: serves requests whose URI prefix is /ms1/
  • ms2: serves requests whose URI prefix is /ms2/
  • def: serves all other requests

Build the back-end services

Deploy six back-end services using any web server, such as nginx, Apache httpd, Tomcat or Jetty; the web server installation process is omitted here.

In this example, we installed three nginx instances on each of the two hosts, 192.168.8.111 and 192.168.8.112:

ms1.srv1 - 192.168.8.111:8080
ms1.srv2 - 192.168.8.112:8080
ms2.srv1 - 192.168.8.111:8081
ms2.srv2 - 192.168.8.112:8081
def.srv1 - 192.168.8.111:8082
def.srv2 - 192.168.8.112:8082

Deploy a health check page, healthCheck.html, in all six nginx services; its content can be anything. Make sure it is reachable via http://ip:port/healthCheck.ht…

Next, deploy the service page in six nginx services:

  • Deploy /ms1/demo.html on the first group
  • Deploy /ms2/demo.html on the second group
  • Deploy /def/demo.html on the third group

Take the content of demo.html deployed on 192.168.8.111:8080 as an example:

Hello! This is ms1.srv1!

The copy deployed on 192.168.8.112:8080 should read:

Hello! This is ms1.srv2!

And so on for the remaining services.
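
As a sketch, on 192.168.8.111 the pages for the first nginx instance might be created like this, assuming a document root of /usr/share/nginx/html (the actual root depends on each nginx's configuration):

mkdir -p /usr/share/nginx/html/ms1
echo "OK" > /usr/share/nginx/html/healthCheck.html
echo "Hello! This is ms1.srv1!" > /usr/share/nginx/html/ms1/demo.html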

Building haproxy

Install HAProxy on host 192.168.8.110. Its installation and configuration steps were covered in the previous section and are omitted here.

Haproxy configuration file:

global
    daemon
    maxconn 30000   # ulimit -n must be at least 60018
    user ha
    pidfile /home/ha/haproxy/conf/haproxy.pid
    log 127.0.0.1 local0 info
    log 127.0.0.1 local1 warning
defaults
    mode http
    log global
    option http-keep-alive   # Use keep-alive connections
    option forwardfor        # Record the client IP in the X-Forwarded-For header
    option httplog           # With httplog on, haproxy logs richer request information
    timeout connect 5000ms
    timeout client 10000ms
    timeout server 50000ms
    timeout http-request 20000ms   # Timeout from connection creation until a complete HTTP request is read from the client; guards against DoS-style attacks
    option httpchk GET /healthCheck.html   # Default health check policy
frontend http-in
    bind *:9001
    maxconn 30000                    # maxconn for this port
    acl url_ms1 path_beg -i /ms1/    # ACL url_ms1 is true when the URI starts with /ms1/ (case-insensitive)
    acl url_ms2 path_beg -i /ms2/    # Same as above, for url_ms2
    use_backend ms1 if url_ms1       # When url_ms1 is true, route to backend group ms1
    use_backend ms2 if url_ms2       # When url_ms2 is true, route to backend group ms2
    default_backend default_servers  # Otherwise, route to backend group default_servers
backend ms1   # Backend service group ms1
    balance roundrobin   # RR load balancing algorithm
    cookie HA_STICKY_ms1 insert indirect nocache   # Insert a cookie named HA_STICKY_ms1
    # Define back-end server ms1.srv1; when a request is routed to it, the cookie value ms1.srv1 is written in the response
    # maxconn for this server is 300
    # The default health check policy applies: check interval and timeout 2000ms, 2 consecutive successes mark the node up, 3 consecutive failures mark it down
    server ms1.srv1 192.168.8.111:8080 cookie ms1.srv1 maxconn 300 check inter 2000ms rise 2 fall 3
    # Same as above; inter 2000ms rise 2 fall 3 are the defaults and can be omitted
    server ms1.srv2 192.168.8.112:8080 cookie ms1.srv2 maxconn 300 check
backend ms2   # Backend service group ms2
    balance roundrobin
    cookie HA_STICKY_ms2 insert indirect nocache
    server ms2.srv1 192.168.8.111:8081 cookie ms2.srv1 maxconn 300 check
    server ms2.srv2 192.168.8.112:8081 cookie ms2.srv2 maxconn 300 check
backend default_servers   # Backend service group default_servers
    balance roundrobin
    cookie HA_STICKY_def insert indirect nocache
    server def.srv1 192.168.8.111:8082 cookie def.srv1 maxconn 300 check
    server def.srv2 192.168.8.112:8082 cookie def.srv2 maxconn 300 check
listen stats   # Monitoring page
    bind *:1080                    # Bind port 1080
    stats refresh 30s              # Refresh monitoring data every 30 seconds
    stats uri /stats               # URI of the monitoring page
    stats realm HAProxy\ Stats     # Authentication realm of the monitoring page
    stats auth admin:admin         # Username and password for the monitoring page

After editing the configuration, start HAProxy:

service haproxy start

test

First, visit the monitoring page at http://192.168.8.110:1080/stats and enter the username and password when prompted.

Next, you can see the monitoring page:

[Screenshot: HAProxy stats page]

The monitoring page lists all the frontend and backend services we configured, together with detailed metrics such as connection counts, queue status, session rate, traffic, and back-end health.

Next, we test the configured functions one by one.

Health check

You can see directly from the monitoring page whether the health checks are configured correctly. In the figure above, the six back-end services under backends ms1, ms2 and default_servers show a status of "20h28m UP", meaning they have been healthy for 20 hours and 28 minutes, while LastChk shows "L7OK/200 in 1ms", meaning an L7 health check (i.e. an HTTP request) was performed 1ms ago and returned 200.

Now rename healthCheck.html on ms1.srv1:

mv healthCheck.html healthCheck.html.bak

Then go to the monitoring page:

[Screenshot: stats page showing ms1.srv1 DOWN]

The status of ms1.srv1 changes to DOWN, and LastChk shows "L7STS/404 in 2ms", meaning the last health check returned 404. Restore healthCheck.html and ms1.srv1 soon returns to UP.

Forwarding requests by URI prefix: visit http://192.168.8.110:9001/ms1…

[Screenshot: response from ms1.srv1]

You can see that the request is successfully routed to ms1.srv1.

Visit http://192.168.8.110:9001/ms2… :

[Screenshot: response from the ms2 group]

Visit http://192.168.8.110:9001/def… :

[Screenshot: response from the def group]

Load balancing and session persistence

After visiting /ms1/demo.html, /ms2/demo.html and /def/demo.html, check the browser's cookies:

[Screenshot: browser cookies]

You can see that HAProxy has written back three cookies used for session persistence. If you refresh these three pages repeatedly, you will find the requests always routed to *.srv1.
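
The same behaviour can be observed from the command line: the first response carries a Set-Cookie header, and replaying that cookie pins later requests to the same server (an illustrative curl session; the page path follows the layout above):

curl -i http://192.168.8.110:9001/ms1/demo.html | grep -i set-cookie
# e.g. Set-Cookie: HA_STICKY_ms1=ms1.srv1; path=/
curl -b "HA_STICKY_ms1=ms1.srv1" http://192.168.8.110:9001/ms1/demo.html   # always hits ms1.srv1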

Next, delete the HA_STICKY_ms1 cookie and visit /ms1/demo.html again:
[Screenshot: response from the other ms1 server]

At the same time, a new cookie has been written

[Screenshot: new HA_STICKY_ms1 cookie]

If the request still lands on ms1.srv1 and no new HA_STICKY_ms1 cookie is written, the browser has probably cached the /ms1/demo.html page and the request never reached HAProxy; a forced refresh with F5 should fix it.

Building an L4 load balancer with HAProxy

When HAProxy works as an L4 load balancer, it does not parse anything related to the HTTP protocol and only processes packets at the transport layer. That is, an HAProxy running in L4 mode cannot forward to different back ends by URL or maintain sessions via cookies.

Also, HAProxy working in L4 mode cannot provide the monitoring page.

However, as an L4 load balancer HAProxy offers higher performance, making it suitable for socket-based services (such as databases, message queues, RPC, mail services, Redis, etc.), and for HTTP services that need no routing logic and already share sessions.

Overall plan

In this example, we use HAProxy in L4 mode to proxy two HTTP services, without session persistence.

global
    daemon
    maxconn 30000   # ulimit -n must be at least 60018
    user ha
    pidfile /home/ha/haproxy/conf/haproxy.pid
    log 127.0.0.1 local0 info
    log 127.0.0.1 local1 warning
defaults
    mode tcp
    log global
    option tcplog            # Enable tcplog
    timeout connect 5000ms
    timeout client 10000ms
    timeout server 10000ms   # In TCP mode, timeout client and timeout server should be set to the same value to prevent problems
    option httpchk GET /healthCheck.html   # Default health check policy
frontend http-in
    bind *:9002
    maxconn 30000                    # maxconn for this port
    default_backend default_servers  # Route requests to backend group default_servers
backend default_servers   # Backend service group default_servers
    balance roundrobin
    server def.srv1 192.168.8.111:8082 maxconn 300 check
    server def.srv2 192.168.8.112:8082 maxconn 300 check
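
Since the back ends still speak HTTP, the TCP-mode frontend can be smoke-tested with curl against port 9002 (an illustrative check):

curl http://192.168.8.110:9002/def/demo.html   # responses alternate between def.srv1 and def.srv2 under roundrobin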

Session persistence in L4 mode

Although HAProxy in TCP mode cannot keep sessions via HTTP cookies, it can easily keep sessions by client IP. Just change

    balance roundrobin

to

    balance source

In addition, HAProxy provides a powerful stick-table feature: it can sample many attributes from transport-layer packets and write them into a stick table as session persistence keys, as sketched below.
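
A minimal sketch of source-IP stickiness using a stick table, per the HAProxy 1.7 syntax (the size and expiry values are illustrative):

backend default_servers
    balance roundrobin
    stick-table type ip size 200k expire 30m   # track up to 200k client IPs, forget after 30 minutes idle
    stick on src                               # pin each source IP to the server it first reached
    server def.srv1 192.168.8.111:8082 maxconn 300 check
    server def.srv2 192.168.8.112:8082 maxconn 300 check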

Key configuration of haproxy

Overview

There are five sections (domains) in HAProxy's configuration file:

global: global parameters
defaults: default properties for all frontends and backends
frontend: configures a front-end service instance (the service HAProxy itself exposes)
backend: configures a group of back-end service instances (the services behind HAProxy)
listen: a combination of frontend and backend; think of it as a more concise way to configure both

Key configuration of global domain

daemon: run HAProxy in background mode; this should normally be used
user [username]: the user the HAProxy process runs as
group [groupname]: the group the HAProxy process runs as
log [address] [device] [maxlevel] [minlevel]: log output configuration; e.g. "log 127.0.0.1 local0 info warning" outputs logs between info and warning level to the local0 facility of the local rsyslog/syslog. [minlevel] can be omitted. HAProxy has eight log levels, from high to low: emerg / alert / crit / err / warning / notice / info / debug
pidfile: absolute path of the file recording the HAProxy process ID; mainly used for stopping and restarting HAProxy
maxconn: number of connections HAProxy handles concurrently; when the connection count reaches this value, HAProxy stops accepting new connections

Key configuration of frontend domain

acl [name] [criterion] [flags] [operator] [value]: defines an ACL, a true/false value computed by evaluating the given expression against an attribute of the request. For example, "acl url_ms1 path_beg -i /ms1/" defines an ACL named url_ms1 that is true when the request URI starts with /ms1/ (case-insensitive)
bind [ip]:[port]: the port this frontend listens on
default_backend [name]: the default backend for this frontend
disabled: disables this frontend
http-request [operation] [condition]: a policy applied to all HTTP requests arriving at this frontend, such as rejecting, requiring authentication, adding a header, replacing a header, defining an ACL, etc.
http-response [operation] [condition]: a policy applied to all HTTP responses returned through this frontend; much the same as above
log: same as the global log configuration, but applies only to this frontend; to inherit the global log configuration, write "log global" here
maxconn: same as the global maxconn, but applies only to this frontend
mode: the frontend's working mode, HTTP or TCP, corresponding to L7 and L4 load balancing
option forwardfor: adds an X-Forwarded-For header to the request, recording the client IP address
option http-keep-alive: serve connections in keep-alive mode
option httpclose: the opposite of http-keep-alive, it disables keep-alive. If HAProxy mainly serves API-style traffic, httpclose can save connection resources, but callers will then be unable to use HTTP connection pooling
option httplog: with httplog on, HAProxy logs requests in a format similar to Apache httpd or nginx
option tcplog: with tcplog on, HAProxy logs more transport-layer packet attributes
stats uri [uri]: enables the monitoring page on this frontend, reachable at [uri]
stats refresh [time]: refresh period of the monitoring data
stats auth [user]:[password]: username and password for the monitoring page
timeout client [time]: timeout when the client stops sending data after the connection is created
timeout http-request [time]: timeout for the client to send a complete HTTP request after the connection is created; mainly used to prevent DoS attacks that send the request extremely slowly to tie up HAProxy connections for a long time
use_backend [backend] if|unless [acl]: used with ACLs to forward to the given backend when the ACL is (or is not) satisfied
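
Tying several of these directives together, a hedged frontend sketch (the ACL name, subnet and header value are illustrative):

frontend http-in
    bind *:9001
    acl is_internal src 192.168.8.0/24        # ACL matching the source address
    http-request deny unless is_internal      # reject clients from outside the subnet
    http-request set-header X-Forwarded-Proto http   # add a header to every request
    use_backend ms1 if { path_beg -i /ms1/ }  # anonymous ACL form of use_backend
    default_backend default_servers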

Key configuration of backend domain

acl: same as in the frontend domain
balance [algorithm]: load balancing algorithm across the servers in this backend; the most common are roundrobin and source. For a complete description see the official documentation, configuration.html#4.2-balance
cookie: enables cookie-based session persistence across this backend's servers. The most common mode is insert, e.g. "cookie HA_STICKY_ms1 insert indirect nocache", meaning HAProxy inserts a cookie named HA_STICKY_ms1 into the response, whose value is the one given in the corresponding server line, and routes requests according to this cookie's value. "indirect" means that if the request already carries a valid HA_STICKY_ms1 cookie, HAProxy will not insert it again in the response; "nocache" forbids gateways and caching servers along the path from caching responses that carry a Set-Cookie header
default-server: specifies default settings for every server in this backend; see the server parameters below
disabled: disables this backend
http-request / http-response: same as in the frontend domain
log: same as in the frontend domain
mode: same as in the frontend domain
option forwardfor: same as in the frontend domain
option http-keep-alive: same as in the frontend domain
option httpclose: same as in the frontend domain
option httpchk [METHOD] [URI] [VERSION]: defines the health check policy in HTTP mode, e.g. "option httpchk GET /healthCheck.html"
option httplog: same as in the frontend domain
option tcplog: same as in the frontend domain
server [name] [ip]:[port] [params]: defines one back-end server in this backend; [params] specifies per-server parameters
check: when given, HAProxy health-checks this server using the method configured by option httpchk. The check accepts three sub-parameters, inter, rise and fall: the check interval/timeout, the number of consecutive successes before the server is marked up, and the number of consecutive failures before it is marked down. The defaults are inter 2000ms rise 2 fall 3
cookie [value]: used with cookie-based session persistence; e.g. "cookie ms1.srv1" means requests routed to this server get a cookie with value ms1.srv1 written into the response (the cookie's name is set by the cookie directive of the backend domain)
maxconn: the maximum number of simultaneous connections HAProxy opens to this server; when reached, new connections to this server wait in a queue. The default is 0, meaning unlimited
maxqueue: length of the waiting queue; when the queue is full, subsequent requests are sent to other servers in this backend. The default is 0, meaning unlimited
weight: the server's weight, 0-256; the higher the weight, the more requests it receives. A server with weight 0 receives no new connections. All servers default to weight 1
timeout connect [time]: timeout for HAProxy to establish a connection to a back-end server
timeout check [time]: by default the connect+response timeout of a health check is the inter value of the server line; when timeout check is set, HAProxy uses inter as the health check's connect timeout and timeout check as its response timeout
timeout server [time]: timeout for a back-end server to respond to a proxied request
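
An illustrative backend combining these parameters, with default-server factoring out the shared settings (the weights and queue length are made up for the example):

backend ms1
    balance roundrobin
    option httpchk GET /healthCheck.html
    cookie HA_STICKY_ms1 insert indirect nocache
    default-server inter 2000ms rise 2 fall 3 maxconn 300   # shared defaults for all servers below
    server ms1.srv1 192.168.8.111:8080 cookie ms1.srv1 weight 2 check   # gets twice the traffic of srv2
    server ms1.srv2 192.168.8.112:8080 cookie ms1.srv2 weight 1 check maxqueue 100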

The defaults domain

Of the frontend and backend settings listed above, all except acl, bind, http-request, http-response and use_backend can also be set in the defaults domain. If a frontend or backend does not configure them, the values from the defaults domain apply.

The listen domain

The listen domain is a combination of the frontend and backend domains; everything configurable in a frontend or backend can be configured in a listen domain.
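
For example, the L4 proxy from the previous section could be written as a single listen block (an equivalent sketch):

listen tcp-in
    bind *:9002
    mode tcp
    balance roundrobin
    server def.srv1 192.168.8.111:8082 maxconn 300 check
    server def.srv2 192.168.8.112:8082 maxconn 300 check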

High availability of HAProxy with Keepalived

Although HAProxy is very stable, it still cannot avoid the risk of operating system failures, host hardware failures, network failures or even power outages. We therefore need a high-availability scheme for HAProxy.

Here we use Keepalived to implement an HAProxy hot-standby scheme: two HAProxy instances on two hosts are online at the same time, the instance with the higher weight is the master, and when the master fails the other instance takes over all traffic automatically.

principle

A Keepalived instance runs on each of the two HAProxy hosts. The two Keepalived instances compete for the same virtual IP, and the two HAProxy instances both try to bind to ports on that virtual IP. Obviously only one Keepalived host can hold the virtual IP at a time, and the HAProxy on that host is the current master. Keepalived maintains an internal weight value, and the instance with the highest weight grabs the virtual IP. Keepalived also periodically checks the health of the HAProxy on its host and increases its weight when the check succeeds.

Building the HAProxy master/standby cluster

Environment preparation

Install and configure HAProxy on two physical hosts. In this example, two identical HAProxy setups are installed on 192.168.8.110 and 192.168.8.111. The specific steps are omitted; see the section "Building an L7 load balancer with HAProxy".

Install keepalived

Download, unzip, compile, install:

wget http://www.keepalived.org/software/keepalived-1.2.19.tar.gz
tar -xzf keepalived-1.2.19.tar.gz
./configure --prefix=/usr/local/keepalived
make
make install

Register as a system service:

cp /usr/local/keepalived/sbin/keepalived /usr/sbin/
cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
chmod +x /etc/init.d/keepalived

Note: keepalived needs to be installed and configured by root

Configure keepalived

Create and edit configuration files

mkdir -p /etc/keepalived/
cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
vi /etc/keepalived/keepalived.conf

Configuration file content:

global_defs {
    router_id LVS_DEVEL   # Virtual router name
}
# HAProxy health check configuration
vrrp_script chk_haproxy {
    script "killall -0 haproxy"   # Use killall -0 to check that the haproxy process exists; cheaper than a ps-based check
    interval 2   # Script run interval, in seconds
    weight 2     # Weight added on each successful check
}
# Virtual router configuration
vrrp_instance VI_1 {
    state MASTER           # Role of this instance, MASTER/BACKUP; write BACKUP in the standby's configuration file
    interface enp0s25      # Name of the local NIC; check with ifconfig
    virtual_router_id 51   # Virtual router ID; must match between master and standby
    priority 101           # Initial weight of this machine; on the standby use a lower value (e.g. 100)
    advert_int 1           # Interval for contending for the virtual address, in seconds
    virtual_ipaddress {
        192.168.8.201      # The virtual IP address; must match between master and standby
    }
    track_script {
        chk_haproxy        # The health check defined above
    }
}
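
On the standby host the same file is used with only the role and priority changed, as the comments above indicate (a sketch of the differing instance block):

vrrp_instance VI_1 {
    state BACKUP             # standby role
    interface enp0s25
    virtual_router_id 51     # must match the master
    priority 100             # lower than the master's 101
    advert_int 1
    virtual_ipaddress {
        192.168.8.201
    }
    track_script {
        chk_haproxy
    }
}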

If the host does not have the killall command, install the psmisc package:

yum install psmisc

Start Keepalived on both hosts:

service keepalived start

verification

After starting, check which of the two hosts holds the virtual IP 192.168.8.201 by running:

ip addr sh enp0s25    # replace enp0s25 with the host's NIC name

The output of the host with virtual IP is as follows:

[Output: enp0s25 shows the virtual IP 192.168.8.201]

The output of another host is as follows:

[Output: enp0s25 without the virtual IP]

If you start the standby machine's Keepalived first, the standby will very likely grab the virtual IP, because its priority is only 1 lower than the master's: after a single successful health check its weight rises to 102, higher than the master's 101.

Now visit http://192.168.8.201:9001/ms1… and you can see the web pages we deployed earlier.

Check /var/log/haproxy.log and you will see the requests landing on the host that holds the virtual IP.

Next, stop the HAProxy instance (or the Keepalived instance) on the current master:

service haproxy stop

Visit http://192.168.8.201:9001/ms1… again and check /var/log/haproxy.log on the standby machine: the request now lands on the standby, and automatic master/standby failover has succeeded.

You can also run the ip addr sh enp0s25 command again and see that the virtual IP has been taken over by the standby machine.

In /var/log/messages you can also see the failover logs output by Keepalived:

[Log excerpt: Keepalived state transition]

Author: kelgon
Link: https://www.jianshu.com/p/c9f…
