Configuring TCP load balancing using nginx

Time: 2021-11-07

Nginx is one of the most popular open source web servers, but it can also serve as a TCP and UDP load balancer. One of its main advantages over HAProxy as a load balancer is that it can also balance UDP-based traffic. In this article, we will demonstrate how to configure nginx to load balance traffic to applications deployed in a Kubernetes cluster.
Assuming the Kubernetes cluster is already set up, we will create a CentOS-based virtual machine for nginx.

The following are the details of the lab setup:

Nginx (CentOS 8 Minimal) – 192.168.1.50
Kube Master – 192.168.1.40
Kube Worker 1 – 192.168.1.41
Kube Worker 2 – 192.168.1.42
Step 1) Install the EPEL repository
Because the nginx package is not available in the default CentOS repositories, we need to install the EPEL repository:

# dnf install epel-release -y
Step 2) Install nginx
Run the following command to install nginx:

# dnf install nginx -y
Verify the details of the nginx package using the rpm command:

# rpm -qi nginx
Configure the firewall to allow access to nginx's HTTP and HTTPS services:

# firewall-cmd --permanent --add-service=http
# firewall-cmd --permanent --add-service=https
# firewall-cmd --reload
Use the following commands to set SELinux to permissive mode, then reboot the system for the change to take effect:

# sed -i 's/^SELINUX=.*$/SELINUX=permissive/' /etc/selinux/config
# reboot
Step 3) Get the NodePort details of the application from Kubernetes
$ kubectl get all -n ingress-nginx
As can be seen from the output above, NodePort 32760 on each worker node is mapped to port 80, and NodePort 32375 is mapped to port 443. We will use these node ports in the nginx configuration file for load balancing.
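If the full output is noisy, the node ports can also be read directly from the ingress controller's service object. A hedged sketch — the service name ingress-nginx-controller is an assumption and may differ in your cluster:

```shell
# Print "name -> nodePort" for each port of the ingress controller service.
# NOTE: the service name "ingress-nginx-controller" is an assumption;
# adjust it to whatever "kubectl get svc -n ingress-nginx" shows.
kubectl get svc ingress-nginx-controller -n ingress-nginx \
  -o jsonpath='{range .spec.ports[*]}{.name}{" -> "}{.nodePort}{"\n"}{end}'
```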

Step 4) Configure nginx for load balancing
Edit the nginx configuration file and add the following:

# vim /etc/nginx/nginx.conf
Comment out the "server" section (lines 38 to 57):
And add the following lines:

upstream backend {
    server 192.168.1.41:32760;
    server 192.168.1.42:32760;
}

server {
    listen 80;

    location / {
        proxy_read_timeout    1800;
        proxy_connect_timeout 1800;
        proxy_send_timeout    1800;
        send_timeout          1800;
        proxy_set_header      Accept-Encoding   "";
        proxy_set_header      X-Forwarded-By    $server_addr:$server_port;
        proxy_set_header      X-Forwarded-For   $remote_addr;
        proxy_set_header      X-Forwarded-Proto $scheme;
        proxy_set_header      Host              $host;
        proxy_set_header      X-Real-IP         $remote_addr;
        proxy_pass http://backend;
    }

    location /nginx_status {
        stub_status;
    }
}
Save the configuration file and exit.
With the above changes, all requests arriving on port 80 of nginx will be routed to NodePort 32760 on the Kubernetes worker nodes (192.168.1.41 and 192.168.1.42).
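Note that the configuration above proxies at the HTTP layer. For pure TCP (layer 4) load balancing — for example, forwarding the HTTPS node port 32375 without terminating TLS on nginx — the stream module can be used instead. A minimal sketch, assuming the block is placed at the top level of /etc/nginx/nginx.conf, outside the http block:

```nginx
# Layer-4 (TCP) load balancing via nginx's stream module.
# This block must sit outside the http {} block.
stream {
    upstream k8s_https {
        server 192.168.1.41:32375;
        server 192.168.1.42:32375;
    }

    server {
        listen 443;
        proxy_pass k8s_https;
    }
}
```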

Start and enable the nginx service using the following commands:

# systemctl start nginx
# systemctl enable nginx
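Before (re)starting nginx, it is worth validating the edited configuration and confirming that nginx is actually listening; a quick sanity check on the nginx host:

```shell
# Validate the configuration syntax without restarting the service
nginx -t
# Confirm nginx is listening on port 80
ss -tlnp | grep ':80'
```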
Test the TCP load balancer of nginx
To test whether nginx's TCP load balancing for Kubernetes works correctly, deploy an nginx-based deployment, expose it on port 80, and define an ingress resource for the deployment. I deployed these Kubernetes objects using the following commands:

$ kubectl create deployment nginx-deployment --image=nginx
deployment.apps/nginx-deployment created
$ kubectl expose deployment nginx-deployment --name=nginx-deployment --type=NodePort --port=80
service/nginx-deployment exposed
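The ingress resource mentioned above is not shown in the commands; a minimal sketch of what it might look like — the resource name, host, and ingressClassName are assumptions:

```yaml
# Hypothetical ingress routing nginx-lb.example.com to the nginx-deployment service.
# The ingressClassName "nginx" assumes the ingress-nginx controller is installed.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-deployment-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: nginx-lb.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-deployment
            port:
              number: 80
```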
You can view the deployment, service, and ingress details with kubectl get deployments,svc,ingress.
Update the hosts file on your local machine so that nginx-lb.example.com points to the IP address of the nginx server (192.168.1.50):

# echo "192.168.1.50 nginx-lb.example.com" >> /etc/hosts
Try visiting nginx-lb.example.com in your browser.
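The same check can be made from the command line with curl; a sketch (the /nginx_status endpoint comes from the stub_status location configured earlier):

```shell
# Fetch the response headers for the default page through the load balancer
curl -I http://nginx-lb.example.com/
# Check nginx's own connection counters via the stub_status endpoint
curl http://nginx-lb.example.com/nginx_status
```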

Summary
The above confirms that nginx works correctly as a TCP load balancer, distributing TCP traffic on port 80 across the Kubernetes worker nodes.