Leak detection and filling! Consolidate your nginx knowledge system, this is enough!

Time: 2021-8-31


Nginx is a high-performance HTTP and reverse proxy server characterized by low memory consumption and strong concurrency. In practice, nginx handles concurrency noticeably better than other web servers of the same kind.

Nginx was developed specifically with performance in mind; performance is its most important requirement, and it pays great attention to efficiency. It is reported that nginx can support up to 50,000 concurrent connections.

Structure diagram of nginx knowledge network

The structure diagram of nginx’s knowledge network is as follows:

[Figure: nginx knowledge network structure diagram]

Reverse proxy

Forward proxy: a computer on a LAN that cannot access the Internet directly can only reach it through a proxy server; this kind of proxy service is called a forward proxy.

[Figure: forward proxy]

Reverse proxy: the client is unaware of the proxy, because no configuration is needed on the client side. The client simply sends its request to the reverse proxy server; the reverse proxy server selects a target server, obtains the data, and returns it to the client.

In this case the reverse proxy server and the target server appear to the outside world as a single server: the proxy server's address is exposed while the real server's IP address stays hidden.

[Figure: reverse proxy]

Load balancing

The client sends requests to the server; the server processes them, sometimes interacting with a database, and returns the results to the client.

The general request and response process is as follows:

[Figure: single-server request and response flow]

However, as the amount of information, the volume of traffic, and the amount of data all grow rapidly, this ordinary architecture can no longer meet demand.

The first thing that comes to mind is upgrading the server's hardware, but with Moore's law increasingly breaking down, improving performance through hardware alone is not advisable. So how do we meet the demand?

We can increase the number of servers and build a cluster, distributing requests across multiple servers instead of concentrating them on a single one; this is what we call load balancing.

Load balancing illustrated:

[Figure: load balancing across multiple servers]

Assuming 15 requests are sent to the proxy server, the proxy server distributes them evenly across three servers, so each server handles 5 requests. This process is called load balancing.

Dynamic and static separation

To speed up website parsing, dynamic pages and static pages can be handed to different servers, which speeds up parsing and reduces the pressure on any single server.

State before dynamic and static separation:

[Figure: before dynamic and static separation]

After dynamic and static separation:

[Figure: after dynamic and static separation]

Install

Reference link:

Nginx service introduction and installation

Nginx common commands

# View version:
./nginx -v
# Start:
./nginx
# Stop (two ways; ./nginx -s quit is recommended):
./nginx -s stop
./nginx -s quit
# Reload the nginx configuration:
./nginx -s reload

Configuration file for nginx

The configuration file consists of three parts:

① Global block

From the start of the configuration file to the events block, this part mainly sets directives that affect the overall operation of the nginx server.

The larger the value of the concurrent-processing directive (worker_processes), the more concurrency nginx can support, but it is constrained by hardware, software, and other factors.

[Figure: global block of nginx.conf]
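
For reference, a minimal sketch of what the global block typically contains (these are illustrative stock defaults, not necessarily the exact values from the figure):

#user  nobody;
worker_processes  1;          # number of worker processes; raising it allows more concurrency
#error_log  logs/error.log;
#pid        logs/nginx.pid;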

② Events block

This block affects the network connections between the nginx server and its users. Common settings include whether to serialize network connections across multiple worker processes, whether a worker may accept multiple new connections at the same time, and so on.

Maximum connections supported:

[Figure: events block of nginx.conf]
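
A minimal sketch of the events block (the value shown is the stock default and is illustrative):

events {
    worker_connections  1024;   # maximum number of simultaneous connections per worker process
}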

③ HTTP block

Features such as reverse proxy and load balancing are configured here.

The location directive is used to match request URIs; its syntax is as follows (a short example follows the list of modifiers below):

location [ = | ~ | ~* | ^~ ] uri {
}

  • =: used for URIs without regular expressions; the request URI must match the string exactly. On a successful match, searching stops and the request is processed immediately.
  • ~: indicates that the URI pattern is a regular expression and the match is case sensitive.
  • ~*: indicates that the URI pattern is a regular expression and the match is case insensitive.
  • ^~: used for URIs without regular expressions; nginx takes the location with the longest matching prefix and uses it immediately, without going on to regular-expression matching.
  • Note: if a URI pattern contains a regular expression, it must be marked with ~ or ~*.
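
A short illustrative sketch of the four modifiers in use (paths and ports are hypothetical):

server {
    listen 80;

    location = /exact {                  # matches only the URI /exact
        return 200 "exact match";
    }
    location ^~ /static/ {               # prefix match; regex locations below are skipped for /static/...
        root /data;
    }
    location ~ \.php$ {                  # case-sensitive regex match
        proxy_pass http://127.0.0.1:8080;
    }
    location ~* \.(gif|jpg|png)$ {       # case-insensitive regex match
        root /data/image;
    }
}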

Reverse proxy practice

① Configure reverse proxy

Purpose: enter the address www.123.com in the browser address bar and be taken to the Tomcat home page on the Linux system.

② Concrete implementation

Configure Tomcat first; because it is relatively simple, it is not described here. Then access it from Windows:
[Figure: Tomcat default page accessed from Windows]

The specific process is as follows:
[Figure: overall reverse proxy flow for www.123.com]

Before modification:

[Figure: nginx.conf before modification]

The configuration is as follows:

[Figure: nginx.conf after adding the reverse proxy configuration]
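
A minimal sketch of the relevant server block, matching the summary later in this article (the IP and ports are those of the example; treat the exact values as illustrative):

server {
    listen       80;
    server_name  192.168.25.132;

    location / {
        proxy_pass   http://127.0.0.1:8080;   # forward everything to the local Tomcat
        index  index.html index.htm index.jsp;
    }
}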

Visit again:

[Figure: visiting www.123.com now shows the Tomcat page]

③ Reverse proxy 2

Objective: have nginx listen on port 9001 and forward requests whose path contains /edu/ to port 8080 and requests whose path contains /vod/ to port 8081.

Preparation: set up two Tomcat instances, on ports 8080 and 8081, and make sure both are accessible; the port is changed in Tomcat's port configuration file.

[Figures: Tomcat port configuration for the 8080 and 8081 instances]

Create a test file under each instance and put 8080!!! in one and 8081!!! in the other:

[Figures: test pages returning 8080!!! and 8081!!!]

Summary of reverse proxy

The first example: the browser accesses www.123.com, and the hosts file entry
192.168.25.132 www.123.com resolves it to the server's IP address.

The browser then connects to port 80 by default; nginx listens on port 80 and proxies the request to local port 8080, so accessing www.123.com is ultimately forwarded to Tomcat on 8080.

Second example:
visiting http://192.168.25.132:9001/edu/ goes directly to 192.168.25.132:8080
visiting http://192.168.25.132:9001/vod/ goes directly to 192.168.25.132:8081

Here nginx listens on port 9001 and uses regular-expression location matching to decide whether to forward the request to the Tomcat on 8080 or the one on 8081.
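
A minimal sketch of the server block behind this second example (regular-expression locations; exact values are illustrative):

server {
    listen       9001;
    server_name  192.168.25.132;

    location ~ /edu/ {
        proxy_pass   http://127.0.0.1:8080;   # requests containing /edu/ go to the 8080 instance
    }
    location ~ /vod/ {
        proxy_pass   http://127.0.0.1:8081;   # requests containing /vod/ go to the 8081 instance
    }
}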

Load balancing practice

① Modify nginx.conf as shown below:

[Figures: nginx.conf upstream and server configuration for load balancing]
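
A minimal sketch of such a configuration, reusing the two Tomcat instances from the previous section (the upstream name and addresses are illustrative):

# inside the http block
upstream myserver {
    server 192.168.25.132:8080;
    server 192.168.25.132:8081;
}

server {
    listen       80;
    server_name  192.168.25.132;

    location / {
        proxy_pass   http://myserver;   # requests are balanced across the upstream servers
    }
}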

② Reload nginx:

./nginx -s reload

③ Create a new edu folder with an a.html file under the webapps folder of the Tomcat on 8081, and fill the file with 8081!!!!

④ Refresh the address in the browser and the requests are distributed to different Tomcat servers:
[Figures: responses alternating between the 8080 and 8081 instances]

The load balancing strategies are as follows (configuration sketches follow the list):

  • Round robin (the default): requests are assigned to the back-end servers one by one, in order.
  • weight: the higher a server's weight, the more requests it is assigned.
  • fair: requests are assigned according to the back-end servers' response times; servers with shorter response times are preferred.
  • ip_hash: each request is assigned according to a hash of the client IP, so a given visitor always reaches the same back-end server, which solves the session problem.
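
Hedged sketches of how each strategy is declared in the upstream block (fair requires a third-party module; addresses are illustrative):

# weight: the 8081 instance receives roughly twice as many requests
upstream myserver {
    server 192.168.25.132:8080 weight=1;
    server 192.168.25.132:8081 weight=2;
}

# ip_hash: requests from the same client IP always reach the same server
upstream myserver {
    ip_hash;
    server 192.168.25.132:8080;
    server 192.168.25.132:8081;
}

# fair: pick the server with the shortest response time (nginx-upstream-fair module)
upstream myserver {
    server 192.168.25.132:8080;
    server 192.168.25.132:8081;
    fair;
}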

Dynamic and static separation practice

What is dynamic and static separation? It means separating dynamic requests from static requests; it is not about physically separating dynamic pages from static pages. It can be understood as nginx serving the static pages while Tomcat handles the dynamic ones.

Dynamic and static separation can be roughly divided into two types:

  • Put static files under a separate domain name on an independent server; this is the current mainstream scheme.
  • Publish dynamic and static files together and separate them through nginx.

Dynamic and static separation diagram analysis:

[Figure: dynamic and static separation architecture]

Preparation: prepare the static files:

[Figures: static test files prepared on the server]

Configure nginx as shown below:

[Figure: nginx.conf location blocks for static content]
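
A minimal sketch of such a configuration, assuming the static files were placed under /data/www and /data/image (hypothetical paths):

server {
    listen       80;
    server_name  192.168.25.132;

    location /www/ {
        root   /data/;               # serves files from /data/www/...
        index  index.html index.htm;
    }
    location /image/ {
        root   /data/;               # serves files from /data/image/...
        autoindex on;                # list the directory contents in the browser
    }
}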

Nginx high availability

If something goes wrong with nginx:

[Figure: a single nginx instance is a single point of failure]

Solution:

[Figure: two nginx servers with keepalived sharing a virtual IP]

Preliminary preparation:

  • Two nginx servers
  • Install keepalived
  • Virtual IP

# Install keepalived:
[[email protected] usr]# yum install keepalived -y
[[email protected] usr]# rpm -q -a keepalived
keepalived-1.3.5-16.el7.x86_64
# Modify the configuration file:
[[email protected] keepalived]# cd /etc/keepalived
[[email protected] keepalived]# vi keepalived.conf

Copy and paste the following configuration onto both machines, overwriting keepalived.conf; the virtual IP is 192.168.25.50.

The corresponding host IP and role need to be modified on each machine:

  • smtp_server 192.168.25.147 (primary)
  • smtp_server 192.168.25.147 (standby)
  • state MASTER on the primary, state BACKUP on the standby

global_defs {
   notification_email {
     [email protected]
     [email protected]
     [email protected]
   }
   notification_email_from [email protected]
   smtp_server 192.168.25.147
   smtp_connect_timeout 30
   router_id LVS_DEVEL              # identifier of the host being accessed
}
vrrp_script chk_nginx {
  script "/usr/local/src/nginx_check.sh"   # path of the detection script
  interval 2                               # interval between script runs (seconds)
  weight 2                                 # weight (priority adjustment)
}
vrrp_instance VI_1 {
    state BACKUP                 # MASTER on the primary, BACKUP on the standby
    interface ens33              # network interface
    virtual_router_id 51         # must be identical within the same group
    priority 90                  # higher on the primary, lower on the standby
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.25.50            # virtual IP
    }
}

The startup command is as follows:

[[email protected] sbin]# systemctl start keepalived.service

Accessing the virtual IP succeeds:

[Figure: the page served via the virtual IP 192.168.25.50]

Stop nginx and keepalived on host 147; the virtual IP is still accessible.

Principle analysis


As shown in the figure below, nginx starts a master process and a worker process. The master is the administrator, and the worker is the process that does the actual work.

[Figure: the master process managing worker processes]

How does a worker work? As shown below:

[Figure: how a worker handles requests]

Summary

The number of workers is best set equal to the number of CPU cores. With one master and multiple workers, hot deployment is possible; and since the workers are independent of one another, one worker crashing does not affect the others.
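
A hedged one-liner showing how that recommendation is expressed in the global block (auto sets the worker count to the number of CPU cores):

worker_processes  auto;    # or an explicit number equal to the CPU core count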

Author: gradually warming °
Source: blog.csdn.net/yujing1314/article/details/107000737
