Application scenarios of nginx
Nginx has three main application scenarios:
- Static resource service
- Reverse proxy service
- API services
Static resource service
Nginx can serve static resources, such as pure static HTML pages, directly from the local file system.
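As a minimal sketch, a server block that serves static files from the local file system might look like the following (the domain name and directory paths are assumptions for the example):

```nginx
server {
    listen 80;
    server_name example.com;    # assumed domain

    location / {
        root  /data/www;        # assumed directory holding the static HTML
        index index.html;
    }

    # serve images and other assets with a long cache lifetime
    location /images/ {
        root    /data;
        expires 30d;
    }
}
```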
Reverse proxy service
Many application services run inefficiently, so their QPS, TPS, and concurrency are limited. To provide users with a highly available service, many application service instances must therefore be formed into a cluster, which requires nginx's reverse proxy capability; scaling the application services dynamically requires load balancing; and the nginx layer also needs to cache responses. The reverse proxy service therefore provides three main functions:
- Reverse proxy
- Load balancing
- Caching
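The three functions above can be combined in one configuration. The following is a sketch, not a production setup; the upstream addresses, cache path, and cache zone name are all assumptions:

```nginx
# assumed application servers forming the cluster
upstream app_cluster {
    server 127.0.0.1:8080;
    server 127.0.0.1:8081;
}

# cache metadata lives in shared memory, cached responses on disk (path assumed)
proxy_cache_path /var/cache/nginx keys_zone=app_cache:10m;

server {
    listen 80;

    location / {
        proxy_pass http://app_cluster;   # reverse proxy + load balancing
        proxy_cache app_cache;           # caching at the nginx layer
        proxy_cache_valid 200 5m;        # cache successful responses for 5 minutes
        proxy_set_header Host $host;
    }
}
```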
API services
Sometimes the application service itself has many performance problems, while the database service behind it performs much better: when the business scenario is simple enough, the database's concurrency and TPS are far higher than the application service's. In such cases nginx can access the database or Redis directly, and nginx's powerful concurrency can also be used to build an application firewall in front of the API services.
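As one small illustration of nginx answering API requests itself, with no application server behind it, nginx can return a response directly from configuration. The endpoint and payload below are invented for the example; talking to a database or Redis directly requires additional modules (for example njs or third-party Redis modules), which are not shown here:

```nginx
server {
    listen 80;

    # a trivial API endpoint served entirely by nginx itself
    location /api/health {
        default_type application/json;
        return 200 '{"status":"ok"}';
    }
}
```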
Nginx architecture foundation
Nginx state machine
When nginx serves external clients, three main kinds of traffic reach it: web, email, and TCP traffic. They are handled by the transport-layer state machine, the application-layer state machine, and the mail state machine respectively. When memory is insufficient to cache all static resources, reads degenerate into blocking disk calls, which must be handled by a thread pool. For each request processed, nginx records an access log and an error log, and these logs are also written to disk.
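The blocking-disk-call case above is what nginx's thread pools address. A sketch of the relevant configuration follows; the pool sizes match nginx's documented defaults, and the file paths are assumptions:

```nginx
# main context: define a thread pool for blocking reads
thread_pool default threads=32 max_queue=65536;

http {
    server {
        location /downloads/ {
            root /data;           # assumed directory of large static files
            sendfile on;
            aio threads=default;  # offload blocking disk reads to the pool
        }
    }
}
```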
The process structure of nginx
Nginx has four kinds of processes:
- Master process. The master process is the parent process; all other processes are its children. The master process manages the worker processes.
- Worker processes. There are multiple worker processes, and they handle the actual requests. Why does nginx adopt a multi-process rather than a multi-threaded structure? Because nginx must guarantee high availability: threads share an address space, so a segmentation fault triggered by a third-party module would bring down the entire nginx process. The multi-process model avoids this problem.
- Cache manager and cache loader processes. Besides the worker processes, the cache is also used by these dedicated cache processes: the cache loader loads the cache, and the cache manager manages it. The cache lookups performed for each request are still done by the worker processes. All of these processes communicate through shared memory.
Why are there multiple worker processes?
Because nginx adopts an event-driven model, it expects each worker process to occupy one CPU from start to finish. This uses the whole CPU more efficiently and improves the CPU cache hit rate; in addition, each worker process can be bound to a specific CPU core.
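In configuration terms, this is usually expressed with two main-context directives (`worker_cpu_affinity auto` is available since nginx 1.9.10 and only on platforms that support CPU affinity):

```nginx
# one worker per CPU core, each pinned to its own core
worker_processes    auto;
worker_cpu_affinity auto;
```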
Using signals to manage nginx’s parent-child processes
We mentioned nginx's command line earlier. In fact, many nginx commands are implemented by sending signals to the master process.
The master process monitors the worker processes. This monitoring relies on the SIGCHLD signal the kernel delivers to the parent when a child process exits, so when a worker dies because of a bug, the master can immediately respawn it.
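The respawn behavior can be mimicked with a tiny shell sketch. This is not nginx code: a real master uses a SIGCHLD handler, while this sketch simply waits on the child and restarts it when it dies.

```shell
#!/usr/bin/env bash
# Minimal supervision sketch: the parent restarts its worker whenever it dies.

respawns=0
while [ "$respawns" -lt 2 ]; do
    sleep 60 &                        # stand-in for a worker process
    worker=$!
    ( sleep 0.2; kill "$worker" ) &   # simulate the worker crashing
    wait "$worker" || true            # parent notices the exit...
    respawns=$((respawns + 1))
    echo "worker exited; respawn #$respawns"   # ...and respawns it
done
```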
The master process can receive the following signals:
- TERM, INT (fast shutdown)
- QUIT (graceful shutdown)
- HUP (reload the configuration file)
- USR1 (reopen log files)
- USR2 (upgrade the binary)
- WINCH (gracefully shut down worker processes)
The worker process can receive the following signals:
- TERM, INT (fast shutdown)
- QUIT (graceful shutdown)
- USR1 (reopen log files)
- WINCH (abnormal termination for debugging, when debug_points is enabled)
Signal corresponding to command line
USR2 and WINCH have no corresponding command-line option; they can only be sent with kill.
The difference between stop and quit is that stop exits immediately, while quit stops gracefully after finishing the requests currently being processed.
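The mapping between `nginx -s` commands and signals can be written down explicitly. The helper below is purely illustrative; it only records the mapping, it does not send anything:

```shell
#!/usr/bin/env bash
# Illustrative lookup: which signal `nginx -s <cmd>` sends to the master process.
signal_for() {
    case "$1" in
        stop)   echo TERM ;;   # fast shutdown
        quit)   echo QUIT ;;   # graceful shutdown
        reload) echo HUP  ;;   # reload the configuration file
        reopen) echo USR1 ;;   # reopen log files
        *)      echo UNKNOWN ;;
    esac
}

echo "nginx -s reload sends SIG$(signal_for reload) to the master process"
```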
How reload really reloads the configuration file
- Send a HUP signal to the master process
- The master process checks the configuration file for syntax errors
- The master process opens new listening ports (if new ports are configured)
- The master process starts new worker processes with the new configuration file
- The master process sends a QUIT signal to the old worker processes
- The old worker processes close their listening sockets and exit after finishing the connections they are currently handling
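The HUP-driven pattern behind these steps can be sketched outside nginx with a mock daemon that re-reads its configuration when it receives SIGHUP. Everything here is illustrative, not nginx internals:

```shell
#!/usr/bin/env bash
# Mock of HUP-triggered reload: re-read a config file when SIGHUP arrives.
cfg=$(mktemp)
echo "workers=2" > "$cfg"

current=""
load_config() { current=$(cat "$cfg"); echo "loaded: $current"; }
trap load_config HUP

load_config                  # initial load
echo "workers=4" > "$cfg"    # operator edits the configuration
kill -HUP $$                 # send HUP to "the master" (this script itself)
echo "running with: $current"
rm -f "$cfg"
```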
How hot deployment really works
In the last article we touched on hot deployment. What exactly are its steps?
- Replace the old nginx binary with the new one (remember to back up the old binary first)
- Send a USR2 signal to the master process
- The master process renames its PID file with the suffix .oldbin
- The master process starts a new master process using the new nginx binary
- Send a QUIT signal to the old master process to shut it down
- To roll back, send a HUP signal to the old master process and a QUIT signal to the new master process
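Assuming the default pid file location (an assumption; adjust for your installation), the upgrade steps map to the following commands. The guard turns the script into a dry run when nginx is not running:

```shell
#!/usr/bin/env bash
# Hot-deployment signal sequence (illustrative; the pid file path is an assumption).
PIDFILE=/usr/local/nginx/logs/nginx.pid

if [ -f "$PIDFILE" ]; then
    old_master=$(cat "$PIDFILE")
    kill -USR2 "$old_master"   # old master renames its pid file to .oldbin, starts a new master
    sleep 1
    kill -QUIT "$old_master"   # gracefully shut down the old master and its workers
    # to roll back instead: send HUP to the old master and QUIT to the new master
    status="signaled"
else
    echo "dry run: nginx is not running (no pid file at $PIDFILE)"
    status="dry-run"
fi
```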
Graceful shutdown of worker process
- Set the worker_shutdown_timeout timer
- Close the listening sockets
- Close idle connections
- Loop, waiting for all remaining connections to close
- Exit the process
Here the timer exists to force the process to exit if the timeout expires while connections are still being processed. Note that nginx can only close HTTP connections gracefully; it cannot do so for proxied WebSocket, TCP, or UDP traffic, because the worker does not parse that data.
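The timer mentioned above is configured with the worker_shutdown_timeout directive (available since nginx 1.11.11; the 10-second value is only an example):

```nginx
# main context: give draining workers at most 10 seconds before forcing exit
worker_shutdown_timeout 10s;
```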
That covers nginx's command line and signals. The next lesson starts with the HTTP module.