Nginx knowledge that front-end engineers must know




Taste: tiger skin egg

Cooking time: 10min

This article is included in the front-end knowledge base on GitHub. If you find the food and drink tasty, a star is great encouragement to the canteen owner.

Historical background

The globalization of the Internet has driven explosive growth in the amount of data online. Meanwhile, around the turn of this century, Moore's law stopped holding for single-core CPUs, and processors moved toward multi-core designs. Apache's architecture was not well prepared for this: one process handles only one connection at a time and cannot take the next request until the current one finishes, which clearly cannot cope with today's massive number of Internet users; moreover, the cost of switching between processes is high. Against this background, nginx came into being, and it can easily handle millions, even tens of millions, of connections.

Nginx advantage

  • High concurrency and high performance
  • Good scalability
  • High reliability
  • Hot deployment
  • Free to use

Main application scenarios of nginx

  • Static resource service, which provides services through the local file system
  • Reverse proxy service, load balancing
  • API service and permission control to reduce application server pressure

Nginx configuration files and directories

You can view the configuration files and directories of an nginx installation with rpm -ql nginx.

The list below shows the configuration files and directories of the latest stable version of nginx installed on XX cloud.

  • /etc/nginx/nginx.conf: core configuration file
  • /etc/nginx/conf.d/default.conf: default HTTP server configuration file
  • /etc/nginx/fastcgi_params: FastCGI configuration
  • /etc/nginx/scgi_params: SCGI configuration
  • /etc/nginx/uwsgi_params: uWSGI configuration
  • /etc/nginx/koi-utf
  • /etc/nginx/koi-win
  • /etc/nginx/win-utf: these three files are character-set mapping files (the author is Russian)
  • /etc/nginx/mime.types: maps HTTP Content-Type values to file extensions
  • /usr/lib/systemd/system/nginx-debug.service
  • /usr/lib/systemd/system/nginx.service
  • /etc/sysconfig/nginx
  • /etc/sysconfig/nginx-debug: these four files configure daemon process management
  • /etc/nginx/modules: basic shared libraries and kernel modules
  • /usr/share/doc/nginx-1.18.0: help documentation
  • /usr/share/doc/nginx-1.18.0/COPYRIGHT: copyright notice
  • /usr/share/man/man8/nginx.8.gz: manual page
  • /var/cache/nginx: nginx cache directory
  • /var/log/nginx: nginx log directory
  • /usr/sbin/nginx: executable command
  • /usr/sbin/nginx-debug: debug executable command

The common nginx commands and configuration-file syntax are easy to look up, so this article will not repeat them. Instead, let's look at the configuration options nginx offers in various scenarios, starting from its functions and real use cases. Before that, let's clarify two concepts:

Forward proxy

A forward proxy in one sentence: the forward proxy acts on behalf of the client, so the server cannot see the real client.

resolver 8.8.8.8;  # DNS resolver used to look up the requested host (Google's public DNS)
server {
    location / {
        # When a client sends a request here, forward it on to the real destination
        # $http_host: the requested host name; $request_uri: the request path
        proxy_pass http://$http_host$request_uri;
    }
}

Reverse proxy

A reverse proxy in one sentence: the reverse proxy acts on behalf of the server, so the client cannot see the real server.
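A minimal reverse-proxy sketch; the domain, port, and backend address below are illustrative, not from the original article:

```nginx
server {
    listen 80;
    server_name example.com;            # illustrative domain
    location / {
        # The client only ever talks to example.com; the real backend stays hidden
        proxy_pass http://127.0.0.1:3000;   # illustrative backend address
        proxy_set_header Host $host;        # preserve the original Host header
        proxy_set_header X-Real-IP $remote_addr;  # pass the real client IP along
    }
}
```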

Cross domain

Cross-origin requests are a scenario every front-end engineer faces. There are many solutions, but you should know that in production, cross-origin problems are solved either with CORS or with an nginx reverse proxy. Configure the following in nginx's configuration file:

server {
    listen 80;
    server_name localhost;  # users visit localhost and are reverse-proxied to the backend
    location / {
        proxy_pass http://127.0.0.1:3000;  # illustrative backend address
    }
}

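For comparison, the CORS approach can also be handled in nginx itself. A hedged sketch, in which the port, path, and backend address are illustrative:

```nginx
server {
    listen 80;
    location /api/ {
        # Answer cross-origin requests by attaching CORS headers
        add_header Access-Control-Allow-Origin *;
        add_header Access-Control-Allow-Methods 'GET, POST, OPTIONS';
        # Reply to preflight OPTIONS requests directly
        if ($request_method = OPTIONS) {
            return 204;
        }
        proxy_pass http://127.0.0.1:3000;  # illustrative backend
    }
}
```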

Gzip

Gzip is a very common data-compression format on the Internet. Plain text can often be compressed to around 40% of its original size, saving a lot of bandwidth. Note, however, that the minimum HTTP version required to enable gzip here is 1.1.

location ~ .*\.(jpg|png|gif)$ {
    gzip off;  # disable compression for already-compressed image formats
    root /data/www/images;
}

location ~ .*\.(html|js|css)$ {
    gzip on;                # enable compression
    gzip_min_length 1k;     # only compress responses larger than 1 KB
    gzip_http_version 1.1;  # minimum HTTP version required for gzip
    gzip_comp_level 9;      # compression level: higher means smaller output but more CPU
    gzip_types text/css application/javascript;  # MIME types to compress
    root /data/www/html;
}

Request limit

Malicious high-volume traffic wastes bandwidth and puts pressure on the server, so the number of connections and the request rate from a single IP are often limited.

There are two main types of request restrictions:

  • ngx_http_limit_conn_module: limits the number of connections
  • ngx_http_limit_req_module: limits the request rate
# $binary_remote_addr: the client IP; zone=conn_zone: the zone name; 10m: shared memory zone size
limit_conn_zone $binary_remote_addr zone=conn_zone:10m;
server {
    # limit each client IP tracked in conn_zone to 1 concurrent connection
    limit_conn conn_zone 1;
}

# rate=1r/s allows one request per second per client IP
limit_req_zone $binary_remote_addr zone=req_zone:10m rate=1r/s;
server {
    location / {
        # burst=5: queue up to 5 excess requests; nodelay: serve queued requests
        # immediately instead of delaying them, and reject anything beyond the burst
        limit_req zone=req_zone burst=5 nodelay;
    }
}

Access control

There are two main types of access control:

  • ngx_http_access_module: IP-based access control
  • ngx_http_auth_basic_module: access control based on user credentials (HTTP basic authentication)

(Basic authentication over plain HTTP is not very secure, so this article will not cover its configuration.)

The following is IP based access control:

server {
    location ~ ^/index.html {
        # index.html can be accessed by everyone except the denied addresses;
        # rules are evaluated top to bottom
        deny 192.168.1.1;  # illustrative address to block
        allow all;
    }
}

AB command

The ab command is short for Apache Bench, a stress-testing tool that ships with Apache. It can also benchmark other web servers such as nginx and IIS.

  • -n: total number of requests
  • -c: number of concurrent requests (must not exceed -n)
ab -n 1000 -c 100 http://127.0.0.1/

Hotlink protection

Hotlink protection works by checking the Referer request header to determine which page a request came from, and using it for access control. This prevents site resources from being embezzled by other sites, protects information, reduces bandwidth loss, and eases server pressure.

location ~ .*\.(jpg|png|gif)$ {  # match the resource types to protect
    # valid_referers defines the whitelist; $invalid_referer is set when the Referer is not allowed
    # none: requests without a Referer; blocked: Referers stripped by proxies or firewalls
    valid_referers none blocked;
    if ($invalid_referer) {
        return 403;
    }
}

Load balancing

When a website has to handle high concurrency and massive amounts of data, load balancing is used to schedule the servers, distributing each request to a suitable server in the application cluster.

Nginx can provide us with load balancing capabilities. The specific configurations are as follows:

# upstream defines the pool of back-end server addresses
# weight sets each server's weight
# requests to http://webcanteen are forwarded to the upstream pool
upstream webcanteen {
    server 127.0.0.1:8001 weight=10;  # illustrative addresses
    server 127.0.0.1:8002 weight=1;
    server 127.0.0.1:8003 weight=1;
}
server {
    location / {
        proxy_pass http://webcanteen;
    }
}

Backend server status

The backend server supports the following state configurations:

  • down: the server does not currently participate in load balancing
  • backup: standby server, used only when the other nodes are unavailable
  • max_fails: the number of failed requests allowed; once reached, the server is paused
  • fail_timeout: how long the server is paused after max_fails failures (default 10s)
  • max_conns: the maximum number of concurrent connections accepted per server
upstream webcanteen {
    server 127.0.0.1:8001 down;                       # illustrative addresses
    server 127.0.0.1:8002 backup;
    server 127.0.0.1:8003 max_fails=3 fail_timeout=10s;
    server 127.0.0.1:8004 max_conns=1000;
}

Distribution mode

  • Round robin (default): each request is distributed to the back-end servers one by one in order; if a server goes down, nginx automatically removes it from the rotation.
  • weight (weighted round robin): an enhanced version of round robin in which the access probability is proportional to the weight; mainly used when back-end servers have uneven performance.
  • ip_hash: each request is distributed by the hash of the client IP, so each visitor consistently reaches the same back-end server.
  • url_hash: requests are distributed by the hash of the requested URL, so each URL is directed to the same back-end server; mainly used when the back-end servers cache content.
  • Custom hash: load balancing based on a hash of any chosen key.
  • fair: requests are distributed according to back-end response time, with shorter response times served first.
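As a sketch of one of these modes, the following enables ip_hash so each client IP is consistently routed to the same back-end server (the pool name and addresses are illustrative):

```nginx
# ip_hash example: each client IP maps to one fixed back-end server,
# which keeps session state (e.g. login) on a single machine
upstream backend {
    ip_hash;
    server 127.0.0.1:8001;  # illustrative addresses
    server 127.0.0.1:8002;
}
server {
    location / {
        proxy_pass http://backend;
    }
}
```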