Nginx ("engine x") is a free, open-source, high-performance HTTP server and reverse proxy; it is also an IMAP/POP3/SMTP proxy server. Nginx is known for its high performance, stability, rich feature set, simple configuration, and low resource consumption.
In other words, nginx can itself host a website (similar to Tomcat) and serve HTTP traffic, and it can also act as a reverse proxy, load balancer, and HTTP cache.
Nginx was designed to solve the server-side C10K problem (serving 10K, i.e. 10,000, concurrent client connections). Unlike traditional servers, which use threads to handle requests, it is built on a more advanced mechanism: an asynchronous, event-driven architecture.
2. Features of nginx
- Cross-platform: it can be compiled and run on most Unix-like systems, and a Windows port also exists.
- Very simple configuration: the configuration syntax is concise and easy to use.
- Non-blocking, highly concurrent connections: the first stage of disk I/O during data copying is non-blocking. Official tests show support for 50,000 concurrent connections, and 20,000-30,000 concurrent connections are realistic in production (thanks to nginx's epoll-based event-processing model).
- The nginx proxy does not keep long-lived connections to the back-end web server;
- Nginx receives user requests asynchronously: it first receives the complete client request and only then forwards it to the back-end web server in one go, which greatly reduces the load on the back end.
- When sending the response, it likewise receives the data from the back-end web server and forwards it to the client.
- In theory, any host that can be pinged can be load-balanced, and internal and external network traffic can be distinguished effectively.
- Built-in server health detection: nginx can detect whether an application server has failed based on the status codes and timeout information it returns, and promptly resubmit failed requests to other nodes.
- In addition, it consumes little memory, costs far less than a hardware load balancer such as F5, saves bandwidth, and is highly stable.
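The health-detection and failover behaviour in the list above can be sketched in a few lines: try each back-end node in order and skip any that refuses the connection or times out. This is a toy illustration of the idea, not nginx's actual upstream logic; the addresses used are made up.

```python
import socket

# Toy failover: probe back ends in order; on connection failure or
# timeout, move on to the next node. Addresses are illustrative only.
def first_healthy(backends, timeout=0.2):
    for host, port in backends:
        try:
            conn = socket.create_connection((host, port), timeout=timeout)
            conn.close()
            return (host, port)        # this node answered: use it
        except OSError:
            continue                   # failed node: try the next one
    return None                        # every node is down
```

A real health check would also inspect HTTP status codes, as the list above notes; this sketch only tests TCP reachability.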
3. Overall architecture of nginx
1. Modular design
The worker process of nginx consists of a core and functional modules. The core maintains a run loop and invokes module functions at the different stages of request processing, such as storage read/write, content transfer, network read/write, output filtering, and forwarding requests to upstream servers. The modular design of the code also lets us select and modify functional modules as needed and compile nginx into a server with exactly the features we require.
2. Proxy design
Proxy design is deeply rooted in nginx. Whether the request or response involves HTTP, Memcache, Redis, FastCGI, or another protocol, it is essentially handled through a proxy mechanism. Nginx is therefore a high-performance proxy server by nature.
3. Event-driven model
The asynchronous, non-blocking event-driven model is the key to nginx's high concurrency and high performance. It also benefits from the event-notification and I/O performance enhancements in Linux, Solaris, and BSD-like operating system kernels, such as kqueue, epoll, and event ports.
4. Master process model
When nginx starts, two types of process are created: one master process and one or more worker processes. The master process does not handle network requests; it is mainly responsible for managing the worker processes, namely the three tasks shown in the figure: loading the configuration, starting worker processes, and performing non-stop upgrades. So after nginx starts, the operating system's process list will show at least two nginx processes.
5. Worker process model
On Unix-like systems, nginx can be configured with multiple workers, and each worker process can handle thousands of network requests concurrently.
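The master/worker split described above can be sketched with a plain `fork()` loop: the master spawns the workers and then only supervises them. This is a minimal sketch (Unix-only, since it relies on `os.fork()`), not nginx's actual source; in real nginx each worker would run an event loop instead of exiting.

```python
import os

def worker_loop(worker_id):
    # A real worker would run an event loop here; we just report and return.
    print(f"worker {worker_id} (pid {os.getpid()}) started")

def start_master(num_workers=2):
    children = []
    for i in range(num_workers):
        pid = os.fork()
        if pid == 0:                 # child: become a worker
            worker_loop(i)
            os._exit(0)              # exit without running master cleanup
        children.append(pid)
    # Master: wait for all workers. A real master would also relay
    # signals (reload, shutdown) and respawn workers that die.
    for pid in children:
        os.waitpid(pid, 0)
    return children

if __name__ == "__main__":
    pids = start_master()
    print(f"master {os.getpid()} supervised {len(pids)} workers")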
4. Modular design of nginx
A highly modular design is the foundation of nginx's architecture. The nginx server is divided into modules, each a functional unit responsible only for its own task, strictly following the principle of "high cohesion, low coupling".
1. Core module
The core modules are essential for the normal operation of the nginx server; they provide core functions such as error logging, configuration-file parsing, the event-driven mechanism, process management, and so on.
2. Standard HTTP module
The standard HTTP modules provide functions related to handling the HTTP protocol, such as port configuration, web-page encoding settings, HTTP response-header settings, and so on.
3. Optional HTTP module
The optional HTTP modules mainly extend the standard HTTP functionality so that nginx can handle special services, such as Flash multimedia streaming, GeoIP request parsing, network transfer compression, SSL security-protocol support, and so on.
4. Mail service module
The mail service modules support nginx's mail proxying, covering the POP3, IMAP, and SMTP protocols.
5. Third-party modules
Third-party modules extend the nginx server with developer-defined functionality, such as JSON support, Lua support, and so on.
5. Forward proxy and reverse proxy in proxy design
First, a forward proxy server is typically used by machines inside a LAN to send requests to servers on the Internet; the proxy acts on behalf of the client. Censorship-circumvention tools such as GoAgent are an example: when a client needs to reach blocked sites, it runs forward-proxy software locally that forwards its HTTP requests to various other servers.
A reverse proxy server acts on behalf of the server side. It receives requests from clients, distributes them to specific back-end servers for processing, and then relays each server's response back to the client. Nginx is reverse-proxy software.
As the figure above shows, the client must be explicitly configured to use a forward proxy; the prerequisite, of course, is knowing the forward proxy server's IP address and the proxy program's port.
A reverse proxy is just the opposite. To the client, the proxy server appears to be the origin server, and the client needs no special configuration. The client sends an ordinary request for content in the reverse proxy's namespace; the reverse proxy then decides where to forward the request (which origin server) and returns the content it obtains to the client.
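The reverse-proxy flow just described (client → proxy → hidden back end → proxy → client) can be sketched with raw sockets. This is a one-shot toy, not nginx: it handles a single connection, buffers the whole response, and uses made-up local addresses.

```python
import socket
import threading

def run_backend(server_sock):
    # The origin server the client never talks to directly.
    conn, _ = server_sock.accept()
    conn.recv(4096)                               # read the forwarded request
    conn.sendall(b"HTTP/1.0 200 OK\r\n\r\nhello from backend")
    conn.close()

def run_proxy(proxy_sock, backend_addr):
    conn, _ = proxy_sock.accept()                 # client connects to the proxy
    request = conn.recv(4096)
    upstream = socket.create_connection(backend_addr)
    upstream.sendall(request)                     # forward the request upstream
    response = b""
    while chunk := upstream.recv(4096):           # read until backend closes
        response += chunk
    upstream.close()
    conn.sendall(response)                        # relay the response back
    conn.close()

def demo():
    backend = socket.socket()
    backend.bind(("127.0.0.1", 0)); backend.listen(1)
    proxy = socket.socket()
    proxy.bind(("127.0.0.1", 0)); proxy.listen(1)
    threading.Thread(target=run_backend, args=(backend,)).start()
    threading.Thread(target=run_proxy,
                     args=(proxy, backend.getsockname())).start()
    # The client only ever sees the proxy's address.
    client = socket.create_connection(proxy.getsockname())
    client.sendall(b"GET / HTTP/1.0\r\n\r\n")
    reply = b""
    while chunk := client.recv(4096):
        reply += chunk
    client.close()
    return reply

if __name__ == "__main__":
    print(demo().decode())
```

Note that the client connects only to the proxy's address; the back end's address exists only inside the proxy, which is exactly the "namespace" point made above.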
6. Nginx event-driven model
In nginx's asynchronous, non-blocking mechanism, a worker process goes on to handle other requests after issuing an I/O call; when the I/O call completes, the worker process is notified. It is for such system calls that the nginx server's event-driven model is mainly used.
As the figure above shows, nginx's event-driven model consists of three basic units: the event sender, the event collector, and the event processor.
- Event sender: responsible for sending I/O events to the event processor;
- Event collector: responsible for collecting the various I/O requests of the worker processes;
- Event processor: responsible for responding to the various events.
The event sender puts each request into a pending-event list and calls the event processor to handle the request in non-blocking I/O mode. This approach is called "I/O multiplexing", and it comes in three main variants: the select model, the poll model, and the epoll model.
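The multiplexing idea can be shown with Python's `selectors` module, which wraps exactly these mechanisms: `DefaultSelector` picks the best backend the OS offers (epoll on Linux, kqueue on BSD/macOS, falling back to poll/select elsewhere). One loop watches many connections and only touches the ones that are ready. This is a sketch of the concept, not nginx's event loop.

```python
import selectors
import socket

def multiplex_demo():
    sel = selectors.DefaultSelector()             # epoll on Linux
    pairs = [socket.socketpair() for _ in range(3)]   # three "connections"
    for i, (_a, b) in enumerate(pairs):
        b.setblocking(False)
        sel.register(b, selectors.EVENT_READ, data=i)

    pairs[0][0].sendall(b"ping")   # only connections 0 and 2 have data
    pairs[2][0].sendall(b"pong")

    ready = []
    for key, _mask in sel.select(timeout=1):
        msg = key.fileobj.recv(16)  # guaranteed not to block: it is ready
        ready.append((key.data, msg))

    for a, b in pairs:
        a.close(); b.close()
    sel.close()
    return sorted(ready)

if __name__ == "__main__":
    print(multiplex_demo())   # connection 1 is silent, so it never appears
```

The key property is that the idle connection costs nothing: `select()` returns only the sockets that actually have work, which is what lets one worker juggle thousands of connections.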
7. How nginx processes requests
Nginx is a high-performance web server that can handle a large number of concurrent requests. It does so by combining a multi-process mechanism with an asynchronous mechanism; the asynchronous mechanism uses an asynchronous, non-blocking mode. Next we introduce nginx's multi-process mechanism and its asynchronous, non-blocking mechanism.
1. Multi-process mechanism
Whenever the server receives a client connection, the master process has a worker process establish the connection and interact with that client; when the connection closes, the child process's work on it ends.
The advantage of using processes is that each process is independent and needs no locking, which avoids the performance cost of locks, reduces programming complexity, and lowers development cost. Furthermore, independent processes cannot affect one another: if one process exits abnormally, the others keep working, and the master process quickly starts a new worker process so that service is not interrupted, minimizing risk.
The disadvantage is that when the operating system creates a child process it must perform operations such as copying memory, which costs some resources and time; under a large volume of requests, this degrades system performance.
2. Asynchronous non-blocking mechanism
Each worker process uses an asynchronous, non-blocking mode, so it can handle more than one client request at a time.
When a worker process receives a request from a client, it issues an I/O call to process it. If the result is not immediately available, the worker goes on to handle other requests (this is "non-blocking"); meanwhile, the client does not have to wait for the response and can do other things (this is "asynchronous").
When the I/O call completes, the worker process is notified; it then temporarily suspends its current work to respond to that client's request.
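The "non-blocking" half of this can be seen directly on a socket: with blocking disabled, a read that has no data raises `BlockingIOError` immediately instead of stalling the worker, which is precisely what frees the process to serve other requests. A minimal sketch using a local socket pair:

```python
import socket

def nonblocking_read():
    a, b = socket.socketpair()
    b.setblocking(False)                 # reads on b no longer stall
    try:
        b.recv(16)                       # nothing sent yet: would block
        first = "read"
    except BlockingIOError:
        first = "would block, do other work"   # worker stays free
    a.sendall(b"data ready")             # the I/O finally completes
    second = b.recv(16)                  # now recv succeeds immediately
    a.close(); b.close()
    return first, second

if __name__ == "__main__":
    print(nonblocking_read())
```

In nginx the worker does not poll like this by hand; the event-driven model from the previous section tells it when the retry is guaranteed to succeed.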
8. Nginx process model
The nginx server uses a master/worker multi-process model. Startup and execution proceed as follows:
- After the master process starts, it receives and handles external signals in a for loop;
- The master process creates worker child processes via the fork() function, and each worker runs its own for loop to receive and process events for the nginx server.
It is generally recommended to set the number of worker processes equal to the number of CPU cores, so that there is no excess of child processes to create and manage, and no cost from processes competing for CPU and being switched in and out. Moreover, to make better use of multi-core CPUs, nginx provides a CPU-affinity binding option: a worker process can be pinned to a core, so that context switches do not invalidate the CPU cache.
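The "one worker per core" rule and the affinity binding can both be sketched in Python. `os.sched_setaffinity` is Linux-only, so it is guarded; the names here are illustrative, not nginx's API (in nginx itself this corresponds to the `worker_processes` and `worker_cpu_affinity` directives).

```python
import os

def recommended_workers():
    # The usual rule of thumb: one worker per CPU core.
    return os.cpu_count() or 1          # cpu_count() can return None

def pin_to_core(pid, core):
    # Pin a process to a single core so its CPU cache stays warm.
    if hasattr(os, "sched_setaffinity"):    # Linux-only call
        os.sched_setaffinity(pid, {core})
        return True
    return False                        # platform without affinity support

if __name__ == "__main__":
    print(f"suggested number of workers: {recommended_workers()}")
    pin_to_core(0, 0)                   # pid 0 means the calling process
```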
Each request is handled by one and only one worker process. Each worker is forked from the master process: in the master, the socket to listen on (listenfd) is set up first, and then multiple worker processes are forked.
When a new connection arrives, the listenfd of every worker process becomes readable. To ensure that only one process handles the connection, all workers compete for the accept_mutex before registering the listenfd read event; the process that grabs the mutex registers the read event and calls accept in its event handler to accept the connection.
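The accept_mutex contention can be sketched with a non-blocking lock: several workers all "wake up" for the same connection, but only the one that grabs the mutex would go on to call accept(), avoiding a thundering herd. Threads stand in for worker processes here purely for illustration.

```python
import threading

def compete_for_accept(num_workers=4):
    accept_mutex = threading.Lock()
    winners = []

    def worker(wid, barrier):
        barrier.wait()                       # all workers wake up together
        if accept_mutex.acquire(blocking=False):
            winners.append(wid)              # this worker would call accept()
        # Losers simply return to their event loops and wait for the
        # next chance to grab the mutex.

    barrier = threading.Barrier(num_workers)
    threads = [threading.Thread(target=worker, args=(i, barrier))
               for i in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return winners

if __name__ == "__main__":
    print("connection accepted by worker:", compete_for_accept())
```

Exactly one worker ends up in `winners` no matter how many compete, which is the whole point of the mutex.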
Once a worker process has accepted the connection, it reads the request, parses it, processes it, generates the response data, returns it to the client, and finally closes the connection. That is one complete request, and it is handled entirely within a single worker process.
While the nginx server is running, the master process and the worker processes need to interact. This interaction relies on pipes implemented with sockets.
1. Interaction between the master process and worker processes
This pipe differs from an ordinary pipe: it is a one-way channel from the master process to a worker process, carrying the instructions the master sends to the worker, the worker process ID, and so on. Meanwhile, the master process communicates with the outside world via signals, and each child process can receive signals and handle the corresponding events.
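A master-to-worker channel of this kind can be sketched with a socket pair created before fork(): the master keeps one end, the worker keeps the other, and the master writes commands down it. Unix-only (it relies on `os.fork()`), and the "reload" command here is illustrative, not nginx's actual wire format.

```python
import os
import socket

def master_worker_channel():
    master_end, worker_end = socket.socketpair()
    pid = os.fork()
    if pid == 0:                               # worker process
        master_end.close()                     # keep only our end
        cmd = worker_end.recv(64)              # wait for a master command
        worker_end.close()
        os._exit(0 if cmd == b"reload" else 1)
    # master process
    worker_end.close()
    master_end.sendall(b"reload")              # e.g. "reload configuration"
    master_end.close()
    _, status = os.waitpid(pid, 0)
    return os.waitstatus_to_exitcode(status)   # 0 means the worker got it

if __name__ == "__main__":
    print("worker exit code:", master_worker_channel())
```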
2. Interaction between worker processes
This interaction is basically the same as master-worker interaction, but it is completed indirectly through the master process. Worker processes are isolated from one another, so when worker process W1 needs to send an instruction to worker process W2, it first finds W2's process ID and then writes the instruction into the channel leading to W2; W2 receives the signal and takes the corresponding action.
This article has given us an overall picture of the nginx server's architecture, including its modular design, its multi-process and asynchronous non-blocking request handling, and its event-driven model. This theoretical grounding helps us better understand nginx's design philosophy and is very useful when learning nginx.