Nginx Series, Bronze Level: Reverse Proxy and Load Balancing Made Simple

Time: 2021-02-11

1. What is nginx?

Nginx is a high-performance, free, open-source HTTP and reverse proxy server, characterized by low memory usage and high concurrency.

What can nginx do?

  • Can be used as an IMAP, POP3 or SMTP proxy server;
  • Can be used as an HTTP server for website publishing;
  • Can be used as a reverse proxy for load balancing.

2. Installation of nginx

2.1 Upload the nginx installation packages to the server

[root@localhost nginx-1.12]# ls -l
total 2956
-rw-r--r--. 1 root root  981687 Dec 21 16:09 nginx-1.12.2.tar.gz
-rw-r--r--. 1 root root 2041593 Dec 21 16:09 pcre-8.37.tar.gz

nginx-1.12.2.tar.gz: the nginx source package, used to install nginx

pcre-8.37.tar.gz: PCRE (Perl Compatible Regular Expressions), a regular expression library written in C; nginx depends on it

2.2 installation of PCRE

  • Decompress PCRE source code installation package

[root@localhost nginx-1.12]# tar zxf pcre-8.37.tar.gz

  • Compile and install PCRE

[root@localhost pcre-8.37]# ./configure
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /usr/bin/mkdir -p
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking whether make supports nested variables... yes
checking whether make supports nested variables... (cached) yes
checking for style of include used by make... GNU
checking for gcc... no
checking for cc... no
checking for cl.exe... no
configure: error: in `/root/nginx-1.12/pcre-8.37':
configure: error: no acceptable C compiler found in $PATH
See `config.log' for more details

If the above error occurs, it means no C compiler is installed; we need to install gcc and gcc-c++ (gcc is the C compiler, gcc-c++ the C++ compiler).

  • Installing GCC and gcc-c++

[root@localhost pcre-8.37]# yum install gcc gcc-c++ -y

  • Recompile PCRE

[root@localhost pcre-8.37]# ./configure
[root@localhost pcre-8.37]# make && make install

./configure: checks the relevant configuration of the current system; configuration options can also be specified via parameters

make: compile

make install: install

If none of the above steps report an error, the compilation and installation are complete.

  • Test whether PCRE is installed successfully

If executing pcre-config --version echoes the version number, PCRE was installed successfully.

[root@localhost pcre-8.37]# pcre-config --version
8.37

2.3 installation of other components

[root@localhost pcre-8.37]# yum install -y make zlib zlib-devel libtool openssl openssl-devel

2.4 installing nginx

  • Unzip nginx source code installation package

[root@localhost nginx-1.12]# tar zxf nginx-1.12.2.tar.gz

  • Compile and install nginx

[root@localhost nginx-1.12]# ls
apache-tomcat-7.0.70.tar.gz nginx-1.12.2 nginx-1.12.2.tar.gz pcre-8.37 pcre-8.37.tar.gz
[root@localhost nginx-1.12]# cd nginx-1.12.2
[root@localhost nginx-1.12.2]# ls
auto CHANGES CHANGES.ru conf configure contrib html LICENSE man README src
[root@localhost nginx-1.12.2]# ./configure
[root@localhost nginx-1.12.2]# make && make install

./configure: checks the relevant configuration of the current system; configuration options can also be specified via parameters

make: compile

make install: install

2.5 start nginx and test whether nginx is installed successfully

  • Start nginx

/usr/local/nginx: the default installation path of a source-built nginx

[root@localhost nginx-1.12.2]# cd /usr/local/nginx/sbin/

nginx: the nginx binary, used to start and stop the service, reload the configuration file, and so on

[root@localhost sbin]# ./nginx

Running ps -aef | grep nginx shows that the related processes now exist

[root@localhost sbin]# ps -aef | grep nginx
root 24981 1 0 17:16 ? 00:00:00 nginx: master process ./nginx
nobody 24982 24981 0 17:16 ? 00:00:00 nginx: worker process
root 24985 9621 0 17:18 pts/1 00:00:00 grep --color=auto nginx

Running netstat -tualnp shows that nginx is listening on port 80


Visit port 80 of nginx server to test whether nginx can be accessed normally


If the page cannot be accessed, it is usually because the firewall is filtering the request; this can be solved by adding port 80 to the firewall's rule list or by turning the firewall off.

firewall-cmd: view and manage the firewall

For the firewall-cmd command, please refer to: https://wangchujiang.com/linux-command/c/firewall-cmd.html

[root@localhost sbin]# firewall-cmd --list-all
public (active)
target: default
icmp-block-inversion: no
interfaces: ens33
sources:
services: dhcpv6-client ssh
ports:
protocols:
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:

  • Solution 1: add port 80 to the firewall rule list

#Add port 80 to firewall rules
[root@localhost sbin]# firewall-cmd --permanent --add-port=80/tcp
success

Reload the firewall without interrupting the established connection

[root@localhost sbin]# firewall-cmd --reload
success

  • Solution 2: turn off the firewall (simplest, but only advisable in a test environment)

#Stop firewall
[root@localhost sbin]# systemctl stop firewalld

Forbid the firewall to boot automatically

[root@localhost sbin]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

Visit port 80 of the nginx server again; nginx can now be accessed normally


Alternatively, you can test by curling the nginx server's IP address

[root@localhost sbin]# curl 192.168.245.130
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p>Thank you for using nginx.</p>
</body>
</html>

3. Common commands

Note: nginx commands must be run from the sbin directory under the nginx installation directory, which is /usr/local/nginx/sbin by default (this restriction can be removed by configuring environment variables)
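A minimal sketch of the environment-variable approach, assuming the default install path used in this article:

```shell
# Assumes nginx was installed to the default source-install path.
export PATH=$PATH:/usr/local/nginx/sbin
# Verify the directory is now on PATH:
echo "$PATH" | grep -q '/usr/local/nginx/sbin' && echo "PATH updated"
```

To make this permanent, append the export line to /etc/profile and run source /etc/profile.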

If you don't know the directory location of nginx, you can find it with the whereis command

  • Get nginx file location

[root@localhost sbin]# whereis nginx
nginx: /usr/local/nginx

  • View nginx version information

[root@localhost sbin]# ./nginx -v
nginx version: nginx/1.12.2

  • Start nginx

[root@localhost sbin]# ./nginx

  • Shut down nginx

[root@localhost sbin]# ./nginx -s stop

  • Reload nginx (re-reads the nginx.conf configuration file without restarting the nginx server)

[root@localhost sbin]# ./nginx -s reload
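Before reloading, it is worth validating the configuration; a small sketch (the path assumes the default install location from this article), since ./nginx -t parses nginx.conf without touching the running server:

```shell
# Test the configuration first; reload only if it parses cleanly.
NGINX=/usr/local/nginx/sbin/nginx
if [ -x "$NGINX" ]; then
    "$NGINX" -t && "$NGINX" -s reload
else
    echo "nginx not found at $NGINX"
fi
```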

4. Configuration file analysis

The main configuration file of nginx is located at /usr/local/nginx/conf/nginx.conf

4.1 document structure

  • Global block

Covers everything from the start of the configuration file up to the events block; it mainly sets configuration directives that affect the overall running of the nginx server

user nobody;

worker_processes: the larger the value, the more concurrent requests nginx can handle

worker_processes 1;

error_log logs/error.log;

error_log logs/error.log notice;

error_log logs/error.log info;

pid logs/nginx.pid;

  • Events block

Configures the network connections between the nginx server and users

worker_connections: the maximum number of connections each worker process supports

events {

worker_connections  1024;

}

  • HTTP block

Configures proxying, caching, logging and other related functions, as well as third-party modules. Reverse proxying, load balancing and so on are all implemented by configuring the http block. The http block contains an http global block and server blocks.

HTTP global block

It includes file includes, MIME-type definitions, log customization, connection timeouts, per-connection request limits, and so on.

include mime.types;
default_type application/octet-stream;

log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for"';

access_log logs/access.log main;

sendfile on;

tcp_nopush on;

keepalive_timeout 0;

keepalive_timeout 65;

gzip on;

Server block

Each http block can contain multiple server blocks, and each server block is equivalent to a virtual host.

A virtual host can be understood as one physical server (the nginx server) divided, by means of server blocks, into several virtual servers, each providing access for users.
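As a sketch of the idea (the server_name values below are hypothetical), two server blocks inside the http block can share port 80, with nginx choosing between them by matching the request's Host header against server_name:

```nginx
server {
    listen      80;
    server_name www.site-a.example;   # hypothetical domain
    location / {
        root  /data/site-a;
        index index.html;
    }
}

server {
    listen      80;
    server_name www.site-b.example;   # hypothetical domain
    location / {
        root  /data/site-b;
        index index.html;
    }
}
```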

server {

    #Server global block
    #Configures the virtual host's listening port and the virtual host's name/IP
    listen       80;
    server_name  localhost;

    #charset koi8-r;

    #access_log  logs/host.access.log  main;

    #Location block
    #Matches the part of the request string other than the virtual host name;
    #handles specific requests, address redirection, data caching, response control, etc.
    location / {
        root   html;
        index  index.html index.htm;
    }

    #error_page  404              /404.html;

    # redirect server error pages to the static page /50x.html
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   html;
    }

}

5. Configuration examples

5.1 Reverse proxy

What is a reverse proxy?

A reverse proxy is a proxy server that receives connection requests from the Internet, forwards them to servers on the internal network, and returns the internal servers' results to the requesting client. To the client, the proxy itself appears to be the server.


For example, when we (users) rent a house (web server), we usually rent it through a platform (proxy server). We deal with the platform and never learn who the landlord is. Alternatively, the landlord may entrust a friend to manage the place; then our contact is not the landlord himself but his friend, who likewise plays the role of the proxy server. This arrangement is a reverse proxy.

Case 1

Requirement:

Visiting port 80 of 192.168.245.130 is proxied to port 8080 of 192.168.245.131

1. Install the JDK (Tomcat depends on a JDK environment)

The JDK is the software development kit for the Java language and the core of Java development; it includes the Java runtime environment (JVM + the Java class library) and the Java tools.

Install the JDK, because Tomcat needs JDK environment support.

  • Upload JDK and Tomcat to the server.

[root@localhost ~]# ls -l
total 196020
-rw-r--r--. 1 root root   9830232 Dec 22 15:40 apache-tomcat-8.0.33.zip
-rw-r--r--. 1 root root 190890122 Dec 22 15:41 jdk-8u171-linux-x64.tar.gz

  • Extract jdk-8u171-linux-x64.tar.gz into the /usr/local directory

[root@localhost ~]# tar zxf jdk-8u171-linux-x64.tar.gz -C /usr/local/
[root@localhost jdk1.8.0_171]# pwd
/usr/local/jdk1.8.0_171
[root@localhost jdk1.8.0_171]# ls -l
total 25964
drwxr-xr-x. 2 10 143     4096 Mar 29  2018 bin
-r--r--r--. 1 10 143     3244 Mar 29  2018 COPYRIGHT
drwxr-xr-x. 4 10 143      122 Mar 29  2018 db
drwxr-xr-x. 3 10 143      132 Mar 29  2018 include
-rw-r--r--. 1 10 143  5203779 Mar 29  2018 javafx-src.zip
drwxr-xr-x. 5 10 143      185 Mar 29  2018 jre
drwxr-xr-x. 5 10 143      245 Mar 29  2018 lib
-r--r--r--. 1 10 143       40 Mar 29  2018 LICENSE
drwxr-xr-x. 4 10 143       47 Mar 29  2018 man
-r--r--r--. 1 10 143      159 Mar 29  2018 README.html
-rw-r--r--. 1 10 143      424 Mar 29  2018 release
-rw-r--r--. 1 10 143 21098592 Mar 29  2018 src.zip
-rw-r--r--. 1 10 143   106782 Mar 29  2018 THIRDPARTYLICENSEREADME-JAVAFX.txt
-r--r--r--. 1 10 143   145180 Mar 29  2018 THIRDPARTYLICENSEREADME.txt

  • Edit /etc/profile to configure the Java environment variables, appending the following at the end


#Configure Java environment variables
export JAVA_HOME=/usr/local/jdk1.8.0_171
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH

  • Run source /etc/profile to apply the changes, then enter javac -version and java -version; if both echo a version, the JDK is installed

[root@localhost ~]# java -version
java version "1.8.0_171"
Java(TM) SE Runtime Environment (build 1.8.0_171-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.171-b11, mixed mode)
[root@localhost ~]# javac -version
javac 1.8.0_171

2. Install Tomcat

Tomcat is a free, open-source web application server, usually used to deploy web applications written in Java.

  • Unzip Tomcat into the /usr/local directory

Check if there is a Tomcat file in this directory

[root@localhost ~]# ls
apache-tomcat-8.0.33.zip jdk-8u171-linux-x64.tar.gz

Install the unzip tool

[root@localhost ~]# yum install -y unzip

Unzip the Tomcat archive into the /usr/local directory

[root@localhost ~]# unzip apache-tomcat-8.0.33.zip -d /usr/local/

Enter the /usr/local directory

[root@localhost ~]# cd /usr/local/

Check whether there is a Tomcat file directory after decompression

[root@localhost local]# ls
apache-tomcat-8.0.33 bin etc games include jdk1.8.0_171 lib lib64 libexec sbin share src

Grant 755 permissions recursively to everything in the apache-tomcat directory

[root@localhost local]# chmod 755 -R apache-tomcat-8.0.33/

Enter the bin directory under the Tomcat directory

[root@localhost local]# cd apache-tomcat-8.0.33/bin/
[root@localhost bin]# ls
bootstrap.jar catalina-tasks.xml configtest.bat digest.bat setclasspath.sh startup.bat tomcat-native.tar.gz version.bat
catalina.bat commons-daemon.jar configtest.sh digest.sh shutdown.bat startup.sh tool-wrapper.bat version.sh
catalina.sh commons-daemon-native.tar.gz daemon.sh setclasspath.bat shutdown.sh tomcat-juli.jar tool-wrapper.sh

Run the startup.sh script with the bash command to start the Tomcat server

[root@localhost bin]# bash startup.sh
Using CATALINA_BASE: /usr/local/apache-tomcat-8.0.33
Using CATALINA_HOME: /usr/local/apache-tomcat-8.0.33
Using CATALINA_TMPDIR: /usr/local/apache-tomcat-8.0.33/temp
Using JRE_HOME: /usr/local/jdk1.8.0_171/jre
Using CLASSPATH: /usr/local/apache-tomcat-8.0.33/bin/bootstrap.jar:/usr/local/apache-tomcat-8.0.33/bin/tomcat-juli.jar
Tomcat started.

Tail the catalina.out file in the logs directory and check for error messages; if there are none, the startup succeeded

[root@localhost bin]# tail -100f ../logs/catalina.out

If the end of catalina.out shows a line like "Server startup in NNNN ms" and no exceptions, Tomcat has started successfully

  • When accessing the Tomcat server (192.168.245.131), the browser displays the Tomcat welcome page, indicating that Tomcat can be accessed normally


If the page cannot be displayed normally, it is generally a firewall problem; execute the following commands and refresh again

Add port 8080 to firewall rule list

[root@localhost bin]# firewall-cmd --add-port=8080/tcp --permanent
success

Reload firewall configuration file

[root@localhost bin]# firewall-cmd --reload
success

3. Configure nginx reverse proxy

Explanation:
Host nginx-01 is the nginx server, IP: 192.168.245.130
Host nginx-02 is the Tomcat server, IP: 192.168.245.131
The browser accesses port 80 of nginx-01 (192.168.245.130), which proxies the request to port 8080 of nginx-02 (192.168.245.131). If the browser displays the Tomcat content of nginx-02 (192.168.245.131), the reverse proxy is working.


  • Edit the nginx.conf main configuration file

vim /usr/local/nginx/conf/nginx.conf

The server block configured here can be understood as a virtual host

server {

    #Listening port
    listen       80;
    #Host to match
    server_name  192.168.245.130;

    #Path location
    location / {
        root   html;
        index  index.html index.htm;
        #Proxied backend: the Tomcat server's address
        proxy_pass http://192.168.245.131:8080/;
    }

}

  • Re-read the nginx.conf configuration file to make the configuration take effect

[root@localhost sbin]# ls
nginx
[root@localhost sbin]# ./nginx -s reload

Visit nginx-01 (192.168.245.130) and check the effect in the browser


Case 2

Requirement:

Visiting 192.168.245.130/house is proxied to 192.168.245.131:8080/house

Visiting 192.168.245.130/food is proxied to 192.168.245.132:8090/food


  1. Install Tomcat and the JDK on nginx-02 and nginx-03

    This has been explained in detail in case 1 and will not be repeated here.

  2. Configure the site directory on nginx-02 and nginx-03

    • nginx-02 (host: 192.168.245.131:8080)

![wechat-screenshot_20191231014351.png](https://i.loli.net/2019/12/31/qA4TRHo6gaSjWtN.png)

    • nginx-03 (host: 192.168.245.132:8090)

![wechat-screenshot_20191231020227.png](https://i.loli.net/2019/12/31/SsiP71fYqczyx4Z.png)

If multiple Tomcats are deployed on the same server, Tomcat's external port 8080 must be changed to avoid port conflicts;

If the Tomcats are deployed on different servers there is no conflict, so changing the port is unnecessary; of course, you may still change it if you want to.

  3. Modify the Tomcat service port number on nginx-03

[root@localhost conf]# pwd
/usr/local/apache-tomcat-8.0.33-8090/conf
[root@localhost conf]# vim server.xml

(screenshot: server.xml edited to change the Connector port to 8090)
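For reference, the element to edit in server.xml is Tomcat's HTTP Connector (the surrounding attributes below are the stock defaults; 8090 is this case's target port):

```xml
<!-- conf/server.xml: change the HTTP connector port from the default 8080 -->
<Connector port="8090" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443" />
```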

  4. Start the Tomcat service on nginx-02 and nginx-03
  • nginx-02


  • nginx-03


  5. Configure the nginx reverse proxy

    • Edit the nginx.conf file and add the proxy configuration

![wechat-screenshot_20191231022042.png](https://i.loli.net/2019/12/31/OKoUEFRlsjf15y7.png)

server {

    #Listening port
    listen       80;
    #Matches the local host; writing the IP address is recommended
    server_name  localhost;

    #charset koi8-r;

    #access_log  logs/host.access.log  main;

    #When the requested URI matches /house, proxy to http://192.168.245.131:8080/house
    location ~ /house {
        proxy_pass http://192.168.245.131:8080;
    }

    #When the requested URI matches /food, proxy to http://192.168.245.132:8090/food
    location ~ /food {
        proxy_pass http://192.168.245.132:8090;
    }

}

  • Location instruction description

Syntax:

location [ = | ~ | ~* | ^~ ] uri {

}

  1. =: used before a URI without regular expressions; the request string must match the URI exactly. On a successful match the search stops and the request is processed immediately.
  2. ~: indicates that the URI contains a regular expression and is case-sensitive.
  3. ~*: indicates that the URI contains a regular expression and is case-insensitive.
  4. ^~: used before a URI without regular expressions; if this location is the longest matching prefix for the request string, nginx uses it immediately instead of going on to match the regular-expression locations.

Note: if the URI contains a regular expression, it must be marked with ~ or ~*.
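As an illustration of the four modifiers (the paths below are made-up examples):

```nginx
location = /ping {                    # exact match: only the URI /ping
    return 200 "pong\n";
}

location ^~ /static/ {                # prefix match; if longest, regex locations are skipped
    root /data;
}

location ~ \.(jpg|png)$ {             # regular expression, case-sensitive
    root /data/img;
}

location ~* \.(jpg|png)$ {            # regular expression, case-insensitive
    root /data/img;
}
```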

  6. Reload the nginx.conf configuration file to make the configuration take effect

[root@localhost sbin]# ./nginx -s reload

  7. Effect display
  • Visiting 192.168.245.130/house returns 192.168.245.131:8080/house

  • Visiting 192.168.245.130/food returns 192.168.245.132:8090/food

5.2 Load balancing

What is load balancing?

Load balancing is a computer technique for distributing load across multiple computers (a computer cluster), network connections, CPUs, disk drives or other resources, in order to optimize resource utilization, maximize throughput, minimize response time and avoid overload. Replacing a single component with multiple load-balanced components also improves reliability through redundancy. Load balancing is usually provided by dedicated software or hardware; its main job is to spread a large number of requests across multiple processing units, and it is how Internet architectures solve the problems of high concurrency and high availability.

Quoted from Wikipedia

For example, to improve its service capacity a restaurant may employ several chefs, who form a chef cluster. When customers order, someone is needed to distribute all the customers' orders evenly among the chefs; only then can the restaurant's service capacity be maximized.


Load balancing configuration with nginx

Requirement: visiting nginx-01 (192.168.245.130:80) balances the traffic across nginx-02 (192.168.245.131:8080) and nginx-03 (192.168.245.132:8080)

  1. Configure nginx-02 (192.168.245.131:8080)

  2. Configure nginx-03 (192.168.245.132:8080)

  3. Configure nginx load balancing on nginx-01 (192.168.245.130:80)

upstream tomcatserver {
    server 192.168.245.131:8080;
    server 192.168.245.132:8080;
}

server {

    listen       80;
    server_name  localhost;

    #charset koi8-r;

    #access_log  logs/host.access.log  main;

    location / {
        proxy_pass http://tomcatserver;
    }

    #error_page  404              /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   html;
    }

}

  4. Effect display

Nginx load balancing strategies

To make load problems easier to solve, nginx provides several load balancing algorithms, so we can choose the mode that fits the scenario.

  • Round-robin policy (default algorithm)

upstream tomcatserver {
    server 192.168.245.131:8080;
    server 192.168.245.132:8080;
    server 192.168.245.133:8080;
    server 192.168.245.134:8080;
}

Round robin is nginx's default load balancing algorithm: the servers in the group (the server nodes configured in the upstream block) take turns handling requests in order. If a server errors while handling a request, the request is handed to the next server in the group, and so on, until a normal response is returned. If every server in the group errors, the result from the last server is returned.

  • Weight policy (weighted round robin)

upstream tomcatserver {
    server 192.168.245.131:8080 weight=5;
    server 192.168.245.132:8080 weight=3;
    server 192.168.245.133:8080;
    server 192.168.245.134:8080;
}

A weight is set for each server in the upstream group, and servers with a higher weight value are given proportionally more requests; the in-group selection strategy becomes weighted round robin. With the weights above, out of every 10 requests, 5 go to .131, 3 to .132, and 1 each to .133 and .134. All servers default to weight 1, which is plain round robin.

  • ip_hash policy

upstream tomcatserver {
    ip_hash;
    server 192.168.245.131:8080;
    server 192.168.245.132:8080;
    server 192.168.245.133:8080;
    server 192.168.245.134:8080;
}

ip_hash implements session persistence: all requests from one client IP are directed to the same server in the group, keeping the session between client and server stable. Only when that server is unavailable (down) will the client's requests be received and processed by the next server. Note: weight cannot be used together with ip_hash.

  • fair (third-party algorithm)

upstream tomcatserver {
    fair;
    server 192.168.245.131:8080;
    server 192.168.245.132:8080;
    server 192.168.245.133:8080;
    server 192.168.245.134:8080;
}

Requests are allocated according to the response time of the servers in the upstream group, with shorter response times served first. Requires third-party module support.

5.3 Separation of dynamic and static content

What is dynamic/static separation?

Before we talk about the separation, let's clarify what "dynamic" and "static" mean.

Websites can generally be divided into static and dynamic ones. A static website is one that does not need to access database resources. A website that must query the database for its information is what we call a dynamic website; "dynamic" does not mean the site has animation effects, but that its content is generated dynamically.

  • Before separation


  1. The browser requests the web server Tomcat, which stores the static resource files the website needs at runtime, such as images, CSS stylesheets, JavaScript files and .html files, as well as the dynamic JSP scripts.
  2. After Tomcat receives the user's request, if the requested JSP needs the database, Tomcat queries the relevant data and performs the associated logic; it then renders the result into HTML and returns it to the browser.
  3. The browser renders the returned HTML. Whenever it encounters image, CSS or JS resources, it asynchronously requests them from the Tomcat server, which returns the requested files; the browser keeps rendering, repeating this until all resources the page needs are loaded.
  4. The browser displays the rendered page.
  • After separation


  1. The browser requests the nginx server, which stores the static resources. Nginx responds directly with the requested static resources; the browser gets the .html file and renders it, requesting any further static resources from nginx as needed and repeating this operation.
  2. When the browser needs to call an interface, it again requests the nginx server; seeing that this is a back-end interface address, nginx forwards the request to the back-end Tomcat server through the reverse proxy.
  3. The Tomcat server processes the request from nginx, accessing the database when necessary. After processing, Tomcat generates the .html/JSON content and returns it to the nginx server, which responds to the browser.
  4. The browser renders the content of nginx's response.
  5. The browser displays the rendered page.

Advantages of static and dynamic separation

  1. API services: after dynamic/static separation, the back end becomes more service-oriented; by simply exposing an API it can serve multiple functional modules or even multiple platforms, which saves back-end manpower and eases maintenance.
  2. Parallel front-end and back-end development: both sides only need to agree on the interface contract and can develop and self-test independently, shortening development time and often reducing joint-debugging time.
  3. Lower back-end load and faster static resources: the back end no longer renders templates into HTML, and a dedicated static server can use more specialized techniques to speed up static resource access.
  4. After separation, even if the dynamic services become unavailable, the static resources are not affected.

Configuration of static and dynamic separation by nginx

To better demonstrate the effect of dynamic/static separation, the requirements are as follows:

  • When accessing nginx (192.168.245.130), dynamic requests are proxied by nginx to the back-end Tomcat.
  • Static resources (192.168.245.130/index.html) are served from the static site on nginx.
  • Tomcat runs as a cluster, so the failure of a single Tomcat does not make the service unavailable.
  • Even if Tomcat is down, the static pages can still be accessed normally.
  1. Configure the Tomcat services, using tomcat-7 and tomcat-8 to simulate the back end

    • tomcat-7 (192.168.245.132:8080)

![image.png](https://i.loli.net/2020/01/01/Xt6DYVWcrMsjo7n.png)

    • tomcat-8 (192.168.245.131:8080)

![image.png](https://i.loli.net/2020/01/01/18xyP26HeqDUdGS.png)

  2. Configure the static service


With a source installation of nginx, the default static resource directory is /usr/local/nginx/html, which holds the nginx welcome page files by default

  • Configure the static resources in the nginx configuration file

location ~ .*\.(html|htm|gif|jpg|jpeg|bmp|png|ico|js|css)$ {
    root /usr/local/nginx/html/food;
}

The root directive specifies the location of the resource directory: any request whose URI ends in one of the static suffixes listed above (html, gif, jpg, png, and so on) is served from the /usr/local/nginx/html/food directory.
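One detail worth noting about root (the paths below are made-up examples): root appends the full request URI to the directory, whereas alias replaces the matched prefix:

```nginx
# root:  GET /img/a.png  ->  /usr/local/nginx/html/food/img/a.png
location /img/ {
    root /usr/local/nginx/html/food;
}

# alias: GET /img/a.png  ->  /data/pictures/a.png  (the /img/ prefix is replaced)
location /img/ {
    alias /data/pictures/;
}
```

The two blocks are shown only for comparison; a single server block cannot contain duplicate locations, so use one or the other.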

  • Test access to the static resource on nginx (192.168.245.130:80/index.html)

  3. Configure the dynamic service in nginx

All non-static resource requests are intercepted and sent by proxy_pass to the upstream block named tomcatserver for load balancing (round robin by default); the requests are forwarded to the back-end Tomcat servers, which gives us both load balancing and a Tomcat cluster.

At this point, the basic configuration of dynamic/static separation is complete.

Explanation:

If you access the back-end service through nginx and the page appears without its styles (the static resource requests return 404), that is normal.

The reason: when the back-end page loads, its static resource requests match the static-resource rule above, so nginx looks for those files under /usr/local/nginx/html/food. The static resources Tomcat's pages actually need are not in that directory, so they cannot be found and the style files report 404, while the back-end service itself remains accessible. In a real architecture with dynamic/static separation, the back end generally only provides API access; this is just a demonstration of the effect.
