Nginx from Beginner to Practice: A Detailed Ten-Thousand-Word Guide!

Time:2020-6-28


Recently I have run into more and more scenarios that require a reverse proxy, and when you build your own blog you will inevitably use nginx. So I spent some time studying nginx intensively and took notes along the way, hoping they help you too.

This article installs and uses nginx in a CentOS environment. If you are not familiar with basic CentOS operations, you can first read the article "Half an Hour of Basics to Get Started with CentOS".

I believe that as a developer, we all know the importance of nginx. Let’s learn together.

CentOS version:7.6

Nginx version:1.16.1

1. About nginx

In a traditional web server, each client connection is handled as a separate process or thread. When switching tasks, the CPU must be switched to the new task and a new runtime context created, which consumes extra memory and CPU time. As concurrent requests increase, the server responds more slowly, and performance suffers.


Nginx is an open-source, high-performance and highly reliable web and reverse proxy server that supports hot deployment. It can run almost 7 * 24 hours without interruption; even after running for several months it does not need a restart, and the software version can be upgraded without downtime. Performance is nginx's most important goal: it uses little memory, has strong concurrency capability, and can support up to 50,000 concurrent connections. Most importantly, nginx is free, may be used commercially, and is relatively simple to configure and use.

The most important usage scenarios of nginx:

  1. Static resource serving, provided through the local file system;
  2. Reverse proxying, which extends to caching, load balancing, and so on;
  3. API services, e.g. with OpenResty;

Front-end developers are no strangers to Node.js. nginx and Node.js share many similar concepts, such as the HTTP server, event-driven design, and asynchronous non-blocking I/O, and most of nginx's functions could also be implemented with Node.js. But nginx and Node.js do not conflict; each has its own area of expertise. nginx is good at handling underlying server-side resources (static resource serving and forwarding, reverse proxying, load balancing, and so on), while Node.js is better at handling the specific business logic of the upper layer. The two can be combined perfectly to support front-end development together.

Let’s focus on the use of nginx.

2. Related concepts

2.1 simple requests and non-simple requests

First, let's understand simple requests and non-simple requests. A request is a simple request if it meets both of the following conditions:

  1. The request method is one of HEAD, GET, or POST;
  2. The HTTP header fields do not go beyond the following: Accept, Accept-Language, Content-Language, Last-Event-ID, and Content-Type (where Content-Type is limited to three values: application/x-www-form-urlencoded, multipart/form-data, and text/plain).

If the two conditions are not both met, it is a non-simple request.

Browsers handle simple and non simple requests differently:

Simple request

For simple requests, the browser adds an `Origin` field to the headers and sends the request directly. The `Origin` field indicates which origin (protocol + domain + port) the request comes from.

If the server finds that the origin specified by `Origin` is not within its allowed scope, it returns a normal HTTP response. When the browser sees that the response headers contain no `Access-Control-Allow-Origin` field, it throws an error that is caught by the XHR `error` event;

If the server finds that the origin specified by `Origin` is within its allowed scope, the response carries several extra header fields beginning with `Access-Control-`.

Non-simple request

A non-simple request is one that makes special demands on the server, for example a request whose method is `PUT` or `DELETE`, or whose `Content-Type` is `application/json`. Before the formal communication, the browser sends a preflight `OPTIONS` request, first asking the server whether the current page's domain is on the server's allow list, and which HTTP methods and header fields may be used. Only after a positive response does the browser send the formal XHR request; otherwise it reports an error.
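As an illustration with hypothetical domains, a preflight exchange looks roughly like this:

```
OPTIONS /api/user HTTP/1.1
Host: be.example.com
Origin: http://fe.example.com
Access-Control-Request-Method: PUT
Access-Control-Request-Headers: Content-Type

HTTP/1.1 204 No Content
Access-Control-Allow-Origin: http://fe.example.com
Access-Control-Allow-Methods: GET, POST, PUT
Access-Control-Allow-Headers: Content-Type
```

Only after receiving a response like the second message does the browser send the actual PUT request.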

2.2 cross domain

The process in which the browser sends a request for data from the currently visited website to another website is called a cross-domain (cross-origin) request.

Cross-domain behavior is determined by the browser's same-origin policy, an important browser security policy. It restricts how a document from one origin, or a script it loads, can interact with resources from another origin. It helps block malicious documents, reduces possible attack vectors, and can be relaxed through CORS configuration.

There are plenty of explanations of cross-domain requests on the Internet, so you can read MDN's "Same-origin policy" document directly for a deeper understanding. Here are a few examples of same and different origins, which I believe programmers can understand at a glance.

# Same-origin examples
http://example.com/app1/index.html   # only the path differs
http://example.com/app2/index.html

http://Example.com:80                # only the case (and the default port 80) differs
http://example.com

# Different-origin examples
http://example.com/app1              # different protocols
https://example.com/app2

http://example.com                   # different hosts
http://www.example.com
http://myapp.example.com

http://example.com                   # different ports
http://example.com:8080

2.3 forward proxy and reverse proxy

The reverse proxy is the counterpart of the forward proxy. Their differences are as follows:

Forward proxy: in the normal access flow, the client sends a request directly to the target server and obtains the content. With a forward proxy, the client instead sends the request to the proxy server and specifies the target (origin) server; the proxy server then communicates with the origin server, delivers the request, fetches the content, and returns it to the client. A forward proxy hides the real client: it sends and receives requests on the client's behalf, making the real client invisible to the server;

For example, if your browser cannot access Google directly, you can use a proxy server to access Google for you. That server is a forward proxy.

Reverse proxy: compared with the normal access flow, with a reverse proxy the server that receives the request directly is the proxy server, which then forwards the request to the real server on the internal network for processing; the result is returned to the client. A reverse proxy hides the real server: it sends and receives requests on the server's behalf, making the real server invisible to the client. It is commonly used to handle cross-domain requests, and basically all large websites deploy reverse proxies today.

For example, when you go to a restaurant you can order Sichuan, Cantonese, or Jiangsu-Zhejiang cuisine, and the restaurant has three chefs. As a customer you don't need to care which chef cooks your dishes; you just place the order. The waiter assigns the dishes on your order to the different chefs for actual processing. The waiter is the reverse proxy server.

In short, generally speaking, a proxy acting on behalf of clients is a forward proxy, and a proxy acting on behalf of servers is a reverse proxy.

The main differences in principle between a forward proxy and a reverse proxy can be seen in the figure below:

2.4 load balancing

In general, the client sends requests to the server and the server processes them; some requests may need to operate on resources such as a database or static files. When the server finishes processing, it returns the results to the client.

For early systems, whose functional requirements were not complex and whose concurrent requests were relatively few, this model was adequate and cheap. But with the continuous growth of information, the rapid growth of traffic and data, and ever-increasing business complexity, this approach can no longer keep up: when concurrency is particularly high, the server is prone to collapse.

Obviously, this is caused by a server performance bottleneck. Apart from simply piling on more machines, the most important approach is load balancing.

When requests grow explosively, the performance of a single machine, however strong, can no longer meet the demand. This is when the concept of a cluster comes into being: since a single server cannot solve the problem, use multiple servers, distribute the requests among them, and spread the load across the different servers. This is load balancing, whose core is "sharing the pressure". When nginx implements load balancing, it generally means forwarding requests to a server cluster.
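To make this concrete, here is a minimal load-balancing sketch in nginx configuration; the backend addresses and weights are only illustrative:

```nginx
# Define the cluster that requests are spread across
upstream backend {
    server 192.168.0.1:8080 weight=2;   # receives roughly twice as many requests
    server 192.168.0.2:8080;
    server 192.168.0.3:8080 backup;     # only used when the other servers are unavailable
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;      # forward each request to one server in the cluster
    }
}
```

By default nginx uses round-robin; the `weight` and `backup` parameters adjust how the pressure is shared.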

For example, when taking the subway during the evening rush hour, there is often a staff member with a big loudspeaker at the entrance: "Please go to entrance B, entrance B has few people and no queue...". That staff member's role is load balancing.


2.5 dynamic and static separation

To speed up website parsing, dynamic pages and static pages can be served by different servers, which speeds up parsing and reduces the pressure on the original single server.


Generally speaking, dynamic resources and static resources need to be separated. Because of nginx's high concurrency and static resource caching features, static resources are often deployed on nginx. If a request is for a static resource, it is fetched directly from the static resource directory; if it is a dynamic request, the reverse proxy forwards it to the corresponding backend application for processing, thus achieving dynamic-static separation.
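A minimal sketch of what this looks like in nginx configuration (the paths and the backend address are only illustrative):

```nginx
server {
    listen 80;

    location /static/ {
        root    /usr/share/nginx/html;      # static resources served straight from disk
        expires 7d;                         # let browsers cache static files
    }

    location /api/ {
        proxy_pass http://127.0.0.1:8080;   # dynamic requests are proxied to the backend application
    }
}
```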

With dynamic-static separation, access to static resources becomes much faster; even if the dynamic service is unavailable, access to static resources is not affected.

3. Nginx quick installation

3.1 installation

First, take a look at the nginx packages available in the yum repository:

yum list | grep nginx

Then install it:

yum install nginx

After installation, run `nginx -v` on the command line to see the nginx version information; the installation is complete.

3.2 related directories

Then we can use `rpm -ql nginx` to see where nginx was installed and what related directories it has. The files under the `/etc` directory are mainly configuration files:

There are two folders of main interest:

  1. `/etc/nginx/conf.d/` is the folder for sub-configuration items; the main configuration file `/etc/nginx/nginx.conf` imports all sub-configuration items in this folder by default;
  2. `/usr/share/nginx/html/` is where static files are usually placed; you can also put them elsewhere according to your own habits;

3.3 running nginx

After installation, start nginx. If the system has a firewall enabled, you need to open the required port in the firewall. Here are some common firewall operations (skip this if the firewall is not enabled):

systemctl start firewalld    # start the firewall
systemctl stop firewalld     # stop the firewall
systemctl status firewalld   # check whether the firewall is on; "running" means it is
firewall-cmd --reload        # reload the firewall; required after permanently opening a port

# Open a port; --permanent means open it permanently rather than temporarily (a temporary rule is lost after restart)
firewall-cmd --permanent --zone=public --add-port=8888/tcp

# Inspect the firewall; the added ports are listed as well
firewall-cmd --list-all

Then set nginx to start automatically on boot:

systemctl enable nginx

Start nginx (other commands will be explained in detail later):

systemctl start nginx

Then visit your IP, and you can see the nginx welcome page ~ Welcome to nginx! 👏

3.4 install NVM & node & Git

# Download nvm, or visit the official site: https://github.com/nvm-sh/nvm#install--update-script
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.3/install.sh | bash

source ~/.bashrc     # reload the configuration file so the nvm command becomes available
nvm ls-remote        # view the remote node versions
nvm install v12.16.3 # install the version you want; I chose 12.16.3 here
nvm list             # view the installed node versions
node -v              # check whether node is installed

yum install git

4. Common commands for nginx operation

Entering `nginx -h` in the console shows the complete list of nginx commands. Here are a few common ones:

nginx -s reload   # send a signal to the master process to reload the configuration file (hot restart)
nginx -s reopen   # reopen the log files
nginx -s stop     # fast shutdown
nginx -s quit     # graceful shutdown: wait for worker processes to finish before exiting
nginx -T          # view the final configuration currently in effect
nginx -t -c <configuration path>   # check whether a configuration has problems; -c can be omitted if the file is in the default location

`systemctl` is the main command of `systemd`, the Linux system and service manager, and is used to manage the system. We can also use it to manage nginx. The related commands are as follows:

systemctl start nginx     # start nginx
systemctl stop nginx      # stop nginx
systemctl restart nginx   # restart nginx

systemctl reload nginx    # reload nginx, for example after modifying the configuration
systemctl enable nginx    # set nginx to start on boot
systemctl disable nginx   # disable starting nginx on boot
systemctl status nginx    # view nginx's running status

5. Nginx configuration syntax

As shown above, the main configuration file of nginx is `/etc/nginx/nginx.conf`; you can use `cat -n nginx.conf` to view it with line numbers.

The structure of `nginx.conf` can be summarized as follows:

main        # global configuration, effective globally
├── events  # configuration affecting the nginx server and its network connections with users
├── http    # configuration for most functions such as proxying, caching and log definitions, and for third-party modules
│   ├── upstream  # the concrete addresses of backend servers; an indispensable part of load-balancing configuration
│   ├── server    # parameters of a virtual host; one http block can contain multiple server blocks
│   ├── server
│   │   ├── location  # a server block can contain multiple location blocks; the location directive matches URIs
│   │   ├── location
│   │   └── ...
│   └── ...
└── ...

The structure of an nginx configuration file is as shown for `nginx.conf` above. The syntax rules of the configuration file are:

  1. The configuration file consists of directives and directive blocks;
  2. Each directive ends with a semicolon `;`, and directives and parameters are separated by spaces;
  3. A directive block organizes multiple directives together with braces `{}`;
  4. The `include` statement allows multiple configuration files to be combined, improving maintainability;
  5. Comments are added with the `#` symbol, improving readability;
  6. Variables are used with the `$` symbol;
  7. The parameters of some directives support regular expressions;
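The rules above can all be seen in a tiny fragment (a sketch, not a complete working configuration):

```nginx
http {                                   # a directive block, organized with braces
    include /etc/nginx/mime.types;       # include combines configuration files
    server {
        listen 8080;                     # a directive ends with a semicolon
        location / {
            return 200 "hello $host\n";  # $ introduces a variable; # introduces a comment
        }
    }
}
```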

5.1 typical configuration

Typical configuration of nginx:

user  nginx;                        # the user running nginx; defaults to nginx and may be omitted
worker_processes  1;                # number of nginx worker processes, usually set equal to the number of CPU cores

error_log  /var/log/nginx/error.log warn;   # nginx error log location
pid        /var/run/nginx.pid;              # location of the PID file written when the nginx service starts

events {
    use epoll;                 # use the epoll I/O model (if you don't know which event model nginx should use, it automatically picks the one best suited to your operating system)
    worker_connections 1024;   # maximum number of concurrent connections allowed per worker process
}

http {   # the most frequently used part of the configuration: proxying, caching, log definitions and most third-party modules are configured here
    # log format
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;   # nginx access log location

    sendfile            on;   # enable efficient file transfer mode
    tcp_nopush          on;   # reduce the number of network segments
    tcp_nodelay         on;
    keepalive_timeout   65;   # keep-alive timeout, in seconds
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;      # file extension to MIME type mapping table
    default_type        application/octet-stream;   # default file type

    include /etc/nginx/conf.d/*.conf;   # load sub-configuration items

    server {
        listen       80;           # listening port
        server_name  localhost;    # configured domain name

        location / {
            root   /usr/share/nginx/html;  # site root directory
            index  index.html index.htm;   # default index files
            deny   172.168.22.11;          # IP address forbidden to access; can be all
            allow  172.168.33.44;          # IP address allowed to access; can be all
        }

        error_page 500 502 503 504 /50x.html;   # default page for 50x errors
        error_page 400 404 error.html;          # ditto, for 400 and 404
    }
}

The server block can contain multiple location blocks. The location instruction is used to match the URI. Syntax:

location [ = | ~ | ~* | ^~ ] uri {
    ...
}

The modifiers after `location`:

  1. `=`: exact match for a URI without regular expressions; if the match succeeds, no further search is performed;
  2. `^~`: placed before a URI without regular expressions; if this is the best prefix match, the rule is adopted and no regex search is performed afterwards;
  3. `~`: the path after the symbol is matched as a case-sensitive regular expression;
  4. `~*`: the path after the symbol is matched as a case-insensitive regular expression; its priority is lower than `~`. If several regex locations match, nginx uses the first one that appears in the configuration file;

If the URI contains a regular expression, it must carry the `~` or `~*` modifier.
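A sketch of how the modifiers interact (the URIs are only illustrative):

```nginx
server {
    listen 8080;

    location = / {                  # exact match: only "/" itself
        return 200 "exact root\n";
    }
    location ^~ /static/ {          # prefix match that suppresses the regex search
        root /usr/share/nginx/html;
    }
    location ~ \.(gif|jpg)$ {       # case-sensitive regular expression
        return 200 "image\n";
    }
    location ~* \.(css|js)$ {       # case-insensitive regular expression, lower priority than ~
        return 200 "asset\n";
    }
}
```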

5.2 global variables

Nginx has some common global variables, which you can use anywhere you configure, as shown in the following table:

Global variable      Meaning
$host                the Host in the request line; if the request has no Host header, this equals the configured server name (excluding the port)
$request_method      the client's request method, e.g. GET, POST
$remote_addr         the client's IP address
$remote_port         the client's port
$args                the parameters (query string) in the request
$arg_PARAMETER       the value of the GET request parameter named PARAMETER
$content_length      the Content-Length field of the request header
$http_user_agent     the client's agent (User-Agent) information
$http_cookie         the client's cookie information
$server_protocol     the protocol of the request, e.g. HTTP/1.0, HTTP/1.1
$server_addr         the server's address
$server_name         the server's name
$server_port         the server's port
$scheme              the request scheme, e.g. http, https

There are more built-in predefined variables; you can search for the keyword "nginx built-in predefined variables" to find plenty of posts about them. These variables can be used directly in the configuration file.
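As a quick way to watch these variables in action, here is a hypothetical debug endpoint that echoes a few of them back to the client:

```nginx
server {
    listen 8090;
    location /debug {
        default_type text/plain;
        # nginx substitutes each $variable per request
        return 200 "host: $host\nmethod: $request_method\nclient: $remote_addr:$remote_port\nargs: $args\n";
    }
}
```

Requesting `/debug?a=1` on this port would show the populated values.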

6. Set up a second-level domain virtual host

After purchasing a domain name from a cloud vendor, you can configure virtual hosts. The configuration path is generally Domain name management -> Resolution -> Add record, where you add a second-level domain. The cloud vendor then resolves that second-level domain to the server IP we configured. We can then configure the virtual host on nginx to listen for and receive requests from the second-level domain.


I have now configured a second-level domain fe on my own server; that is, accessing `fe.sherlocked93.club` from the Internet also reaches our server.

Since the default configuration file `/etc/nginx/nginx.conf` contains the line `include /etc/nginx/conf.d/*.conf;` in its http module, all `*.conf` files under the `conf.d` folder are imported into the main configuration file as sub-configuration items. For ease of maintenance, I create a new configuration file in `/etc/nginx/conf.d`: `fe.sherlocked93.club.conf`

# /etc/nginx/conf.d/fe.sherlocked93.club.conf

server {
    listen 80;
    server_name fe.sherlocked93.club;

    location / {
        root  /usr/share/nginx/html/fe;
        index index.html;
    }
}

Then create a new fe folder under `/usr/share/nginx/html` with a file `index.html` in it, write something casual into it, reload with `nginx -s reload`, and enter `fe.sherlocked93.club` in the browser: the newly created fe folder is now reachable from the second-level domain:


7. Configure reverse proxy

Reverse proxying is the server capability most commonly used at work, and it is often used to solve cross-domain problems. Here is a brief introduction to how to implement a reverse proxy.

First enter the main configuration file of nginx:

vim /etc/nginx/nginx.conf

To display line numbers for convenience, `:set nu` (personal habit). Then go to the `server` block in the `http` module and, in `location /`, add a line of `proxy_pass` configuration to redirect the default address:

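The change amounts to adding one `proxy_pass` line inside `location /` of the default server; a minimal sketch (the target URL is only an illustration):

```nginx
location / {
    root   /usr/share/nginx/html;
    index  index.html index.htm;
    proxy_pass https://www.bilibili.com;   # once proxy_pass is present, requests are forwarded instead of served from root
}
```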

Save and exit after modifying, reload with `nginx -s reload`, and enter the default address in the browser: it now jumps straight to Bilibili, and a simple proxy is in place.

In practice, you can forward the request to another server on the local machine, or jump to a service on a different port according to the access path.

For example, we can listen on port 9001 and reverse-proxy requests according to the access path:

  1. requests to http://127.0.0.1:9001/edu are forwarded to http://127.0.0.1:8080;
  2. requests to http://127.0.0.1:9001/vod are forwarded to http://127.0.0.1:8081.

How to configure this? First, open the main configuration file, and then add a server block under the HTTP module:

server {
    listen 9001;
    server_name *.sherlocked93.club;

    location ~ /edu/ {
        proxy_pass http://127.0.0.1:8080;
    }

    location ~ /vod/ {
        proxy_pass http://127.0.0.1:8081;
    }
}

There are other directives for the reverse proxy that are worth knowing:

  1. `proxy_set_header`: modify the request headers coming from the client before sending the request to the backend server;
  2. `proxy_connect_timeout`: the timeout for nginx to establish a connection with the backend proxied server;
  3. `proxy_read_timeout`: how long nginx waits for a response after sending a read request to the backend server group;
  4. `proxy_send_timeout`: how long nginx waits after sending a write request to the backend server group;
  5. `proxy_redirect`: modify the Location and Refresh headers in responses returned by the backend server.
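A sketch combining these directives (the timeout values and the backend address are only illustrative):

```nginx
location /api/ {
    proxy_pass http://127.0.0.1:8080;
    proxy_set_header Host       $host;          # pass the original Host header to the backend
    proxy_set_header X-Real-IP  $remote_addr;   # pass the real client IP
    proxy_connect_timeout 5s;                   # give up quickly if the backend is unreachable
    proxy_read_timeout    60s;                  # wait up to 60s for the backend response
    proxy_send_timeout    60s;
    proxy_redirect off;                         # leave Location/Refresh headers from the backend untouched
}
```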

8. Cross domain CORS configuration

We already introduced the concepts of simple requests, non-simple requests and cross-domain above; take another look at that explanation if anything is unclear. Nowadays projects with separated front and back ends dominate: the front-end service often runs locally and needs to access different backend addresses, so cross-domain problems are inevitable.


To solve a cross-domain problem, we first have to create one. Set up the second-level domains `fe.sherlocked93.club` and `be.sherlocked93.club` in the same way as before, both resolving to the ECS address. Although the corresponding IP is the same, a request from the `fe.sherlocked93.club` domain to the `be.sherlocked93.club` domain is still cross-domain, because the hosts being accessed differ (if you don't know why, refer back to the earlier cross-domain content).

8.1 using reverse proxy to solve cross domain problems

The front-end service at `fe.sherlocked93.club` makes page requests to the backend service at `be.sherlocked93.club`, causing cross-domain errors. This can be configured as follows:

server {
    listen 9001;
    server_name fe.sherlocked93.club;

    location / {
        proxy_pass http://be.sherlocked93.club;   # note: proxy_pass requires the scheme
    }
}

In this way, all requests to the previous domain `fe.sherlocked93.club` are proxied to `be.sherlocked93.club`: the front-end requests reach the backend address through our own server, bypassing cross-domain restrictions.

Here, both static file requests and backend service requests start with `fe.sherlocked93.club`, which makes them hard to tell apart. So, to forward backend service requests uniformly, we usually agree on an `/apis/` prefix (or some other path) to distinguish them from static resource requests. We can then configure it as follows:

# For cross-domain requests, the agreed path for proxied backend service requests starts with /apis/
location ^~ /apis/ {
    # Rewrite the request: splice the first captured group of the regex onto the real request, and stop further matching with break
    rewrite ^/apis/(.*)$ /$1 break;
    proxy_pass http://be.sherlocked93.club;

    # Cookie passing and write-back between the two domains
    proxy_cookie_domain be.sherlocked93.club fe.sherlocked93.club;
}

In this way we access static resources as `fe.sherlocked93.club/xx.html` and dynamic resources as `fe.sherlocked93.club/apis/getAwo`. The browser page still appears to be visiting the front-end server, bypassing the browser's same-origin policy; after all, it doesn't look cross-domain.

You can also forward both the front-end and back-end addresses uniformly to yet another server, say `server.sherlocked93.club`, and distinguish static resource requests from backend service requests only by the path appended afterwards; it depends on your requirements.

8.2 configure the header to solve cross domain problems

When the browser accesses a cross-origin server, nginx can also be configured directly on that cross-domain server. The front end can then develop without any changes, since the address actually used to access the backend does not need to be switched to the front-end service's address, which makes this approach more adaptable.

For example, the front-end site is `fe.sherlocked93.club`, and a page under this address requests resources under `be.sherlocked93.club`. Say the former's `fe.sherlocked93.club/index.html` has the following content:

<html>
<body>
    <h1>welcome fe.sherlocked93.club!!</h1>
    <script type='text/javascript'>
        var xmlhttp = new XMLHttpRequest()
        xmlhttp.open("GET", "http://be.sherlocked93.club/index.html", true);
        xmlhttp.send();
    </script>
</body>
</html>

Open `fe.sherlocked93.club/index.html` in the browser; the result is as follows:

Clearly this is a cross-domain request. Accessing `http://be.sherlocked93.club/index.html` directly in the browser works, but accessing it from an HTML page under `fe.sherlocked93.club` produces a cross-domain error.

Create a new configuration file in the `/etc/nginx/conf.d/` folder for the second-level domain `be.sherlocked93.club`:

# /etc/nginx/conf.d/be.sherlocked93.club.conf

server {
    listen       80;
    server_name  be.sherlocked93.club;

    add_header 'Access-Control-Allow-Origin' $http_origin;     # this global variable holds the Origin of the current request; requests with cookies do not support *
    add_header 'Access-Control-Allow-Credentials' 'true';      # true means cookies may be sent
    add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';   # allowed request methods
    add_header 'Access-Control-Allow-Headers' $http_access_control_request_headers;   # allowed request headers; can be *
    add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range';

    if ($request_method = 'OPTIONS') {
        add_header 'Access-Control-Max-Age' 1728000;   # validity period of the OPTIONS preflight; within it, no new preflight is needed
        add_header 'Content-Type' 'text/plain; charset=utf-8';
        add_header 'Content-Length' 0;

        return 204;   # 200 is also OK
    }

    location / {
        root  /usr/share/nginx/html/be;
        index index.html;
    }
}

Then reload the configuration with `nginx -s reload` and visit `fe.sherlocked93.club/index.html` again. The headers we just configured now appear in the response:

The cross-domain problem is solved.

9. Enable gzip compression

Gzip is a commonly used web page compression technique. After gzip compression, a transferred page can usually shrink to half its size or even less (so the official site says). A smaller page means bandwidth savings and faster transfers; for large, heavily visited websites especially, shrinking every static resource adds up to considerable traffic and bandwidth savings.
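A quick local demonstration of the effect, using the gzip command line tool (the sample content is deliberately repetitive, so the ratio here is better than for a real page):

```shell
# Create a repetitive 2400-byte sample file
printf 'hello nginx %.0s' $(seq 1 200) > /tmp/sample.txt
# Compress it, keeping the original (-k) and overwriting any old archive (-f)
gzip -kf /tmp/sample.txt
# Compare the sizes
wc -c < /tmp/sample.txt
wc -c < /tmp/sample.txt.gz
```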

A quick Baidu search turns up many test sites that check whether a target page has gzip enabled. Using a "web page gzip compression detection" tool and entering Juejin's `juejin.im` below, let's take a peek at whether it has gzip enabled.


We can see that Juejin has gzip enabled, and the compression works quite well, reaching 52%: the original 34 KB page needs only 16 KB after compression. You can imagine how much this speeds up page transfer.

9.1 nginx configuration gzip

Using gzip requires cooperation between the nginx configuration and the browser: the request headers must include `Accept-Encoding: gzip` (every browser since IE5 supports it, and it is the default in modern browsers). Generally, when requesting static resources such as HTML and CSS, a supporting browser adds the `Accept-Encoding: gzip` header to indicate that it supports gzip compression. When nginx receives such a request, if the corresponding configuration exists, it returns the gzip-compressed file to the browser together with the response header `Content-Encoding: gzip`, telling the browser which compression method was used (the browser generally announces the compression methods it supports, and the server picks one). After receiving the compressed file, the browser decompresses it accordingly.

First, let's see how to configure gzip in nginx. As before, for ease of management, create a new configuration file `gzip.conf` in the `/etc/nginx/conf.d/` folder:

# /etc/nginx/conf.d/gzip.conf

gzip on;   # off by default; whether to enable gzip
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

# The two directives above are basically enough to get started; the ones below are for those willing to dig deeper
gzip_static on;
gzip_proxied any;
gzip_vary on;
gzip_comp_level 6;
gzip_buffers 16 8k;
# gzip_min_length 1k;
gzip_http_version 1.1;

A little explanation:

  1. gzip_types: the MIME types to compress with gzip; text/html is always compressed by nginx;
  2. gzip_static: off by default. When enabled, nginx first checks whether a pre-compressed file ending in .gz exists for the requested static file and, if so, returns that .gz file's content directly;
  3. gzip_proxied: off by default; used when nginx acts as a reverse proxy, to enable or disable gzip compression of responses received from the proxied server;
  4. gzip_vary: adds Vary: Accept-Encoding to the response header, so that proxy servers can tell from Accept-Encoding whether gzip was enabled;
  5. gzip_comp_level: the gzip compression level, 1-9; 1 is lowest, 9 highest. The higher the level, the better the compression ratio but the longer the compression takes; 4-6 is recommended;
  6. gzip_buffers: how much memory to allocate for buffering compression results; 16 8k means 16 buffers of 8K each (128K total);
  7. gzip_min_length: the minimum response size, read from the Content-Length header, eligible for compression. The default is 0, which compresses responses of any size. Setting it so that only responses larger than 1K are compressed is recommended, since compressing very small files may add overhead for nothing;
  8. gzip_http_version: 1.1 by default; the minimum HTTP version required to enable gzip;

This configuration can be placed in the http block to apply to the entire server, or inside the server or location block of the particular virtual host that needs it. Written as above, of course, it is included into the http block.

More complete configuration information can be found in the official documentation for ngx_http_gzip_module. Before the configuration, the response looks like this:

[Screenshot: response headers before enabling gzip]

After the configuration, there is one more response header, Content-Encoding: gzip, and the returned body is compressed:

[Screenshot: response headers after enabling gzip, with Content-Encoding: gzip]

Note that it is generally recommended to add gzip_min_length 1k to the gzip configuration; otherwise:

[Screenshot: a small file growing after gzip compression, showing -48%]

Because the file is too small, gzip "optimizes" it by -48%: the compressed file is larger than the original. So it is best not to gzip files smaller than 1kb.

9.2 gzip configuration of webpack

When the current project is packaged with webpack, you can also enable gzip compression:

//Vue-cli3's vue.config.js file
		const CompressionWebpackPlugin = require('compression-webpack-plugin')

		module.exports = {
		  // gzip configuration
		  configureWebpack: config => {
		    if (process.env.NODE_ENV === 'production') {
		      // production environment
		      return {
		        plugins: [new CompressionWebpackPlugin({
		          test: /\.js$|\.html$|\.css$/,   // file name pattern to match
		          threshold: 10240,               // compression threshold: only compress files over 10K
		          deleteOriginalAssets: false     // whether to delete the source files
		        })]
		      }
		    }
		  },
		  ...
		}

The packed file is as follows:

[Screenshot: packed dist folder with .gz files alongside the originals]

Here you can see that files over 10kb have a corresponding .gz file produced by gzip compression, while files under 10kb were not gzipped. If you want a smaller size threshold for compression, configure the compression-webpack-plugin accordingly.

So why bother with gzip in webpack when nginx already compresses? Because having nginx compress files on the fly consumes the server's computing resources; if gzip_comp_level is set relatively high, it raises server overhead and correspondingly lengthens the client's request time, which is hardly worth it.

If the compression is done at front-end build time instead, and the highly compressed files are deployed to the server as static resources, nginx first finds these pre-compressed files and returns them to the client directly. This effectively hands the compression work from nginx over to webpack, ahead of time, saving server resources, so using webpack to configure gzip compression for production is recommended.

10. Configure load balancing

The main idea of load balancing is to distribute load evenly and sensibly across multiple servers, spreading the pressure.

The main configuration is as follows:

http {
		upstream myserver {
		  # ip_hash;                         # ip_hash mode
		  # fair;                            # fair mode
		  server 127.0.0.1:8081;             # load-balanced backend addresses
		  server 127.0.0.1:8080;
		  server 127.0.0.1:8082 weight=10;   # weight mode; defaults to 1 if omitted
		}

		server {
		  location / {
		    proxy_pass http://myserver;
		    proxy_connect_timeout 10;
		  }
		}
}

Nginx provides several distribution strategies; the default is round robin. The strategies are:

  1. Round robin, the default: requests are assigned to the backend servers one by one in order; if a backend goes down, it is removed automatically;
  2. weight: the higher the weight, the higher the probability of being chosen; used when backend servers have uneven capabilities;
  3. ip_hash: each request is assigned according to a hash of the client IP, so each visitor consistently reaches the same backend server. This solves session sharing for dynamic sites: with plain load balancing each request may land on a different server in the cluster, so a user logged in on one server could be moved to another and lose the login state, which is clearly unacceptable;
  4. fair (third party): requests are assigned according to backend response time, shortest first. This depends on the third-party module nginx-upstream-fair, which must be installed first;

11. Configure dynamic and static separation

Dynamic/static separation was introduced earlier: it means handling dynamic and static requests separately. There are two main approaches. One is to serve static files from a separate domain name on dedicated servers, which is the current mainstream and recommended scheme. The other is to keep dynamic and static files mixed together and separate them through nginx configuration.

Requests with different suffixes are forwarded differently via location blocks. The expires directive sets a browser cache expiry time, reducing requests and traffic to the server. What expires does: it assigns the resource an expiry time, so the browser can check expiry by itself without going to the server for validation, generating no extra traffic. This approach is well suited to resources that change infrequently (if a file is updated often, expires caching is not recommended). I set 3d here: within three days, a request for the URL is compared against the file's last modification time on the server; if it has not changed, the file is not fetched from the server again and status 304 is returned, while if it has been modified, it is downloaded from the server anew with status 200.

server {
		location /www/ {
		  root /data/;
		  index index.html index.htm;
		}

		location /image/ {
		  root /data/;
		  autoindex on;
		}
}
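As a sketch, the 3-day expires described above could be layered onto the image location like this (assuming the same paths as the example):

```nginx
location /image/ {
    root /data/;
    autoindex on;
    expires 3d;   # the browser may reuse the file for 3 days; after expiry it
                  # revalidates and gets 304 (unchanged) or 200 (modified)
}
```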

12. Configure high availability cluster (dual hot standby)

When the primary nginx server goes down, switch to the backup nginx server

[Diagram: keepalived dual-machine hot standby with a virtual IP]

First, install keepalived:

yum install keepalived -y

Then edit the /etc/keepalived/keepalived.conf configuration file: define a health-check mechanism with vrrp_script, and reference it through track_script inside vrrp_instance so that the check result drives failover between nodes:

global_defs {
		notification_email {
		  [email protected]
		}
		notification_email_from [email protected]
		smtp_server 127.0.0.1
		smtp_connect_timeout 30          # mail settings; optional here
		router_id LVS_DEVEL              # name of this server; check with the hostname command
}

vrrp_script chk_maintainace {            # a detection script named chk_maintainace
		script "[[ -e /etc/keepalived/down ]] && exit 1 || exit 0"   # a script path or a shell command
		# script "/etc/keepalived/nginx_check.sh"                    # e.g. a script path
		interval 2                       # check every 2 seconds
		weight -20                       # when the check fails, lower this server's priority by 20
}

vrrp_instance VI_1 {                     # each vrrp_instance defines one virtual router
		state MASTER                     # MASTER on the primary machine, BACKUP on the standby
		interface eth0                   # NIC name; check with ifconfig
		virtual_router_id 51             # virtual router ID, below 255; must match on master and backup
		priority 100                     # the master's priority must be higher than the backup's
		advert_int 1                     # heartbeat interval (default)
		authentication {                 # authentication
		  auth_type PASS
		  auth_pass 1111                 # password
		}
		track_script {                   # run the detection script defined above
		  chk_maintainace
		}
		virtual_ipaddress {              # the virtual IP (VIP)
		  172.16.2.8
		}
}

The detection script nginx_check.sh mentioned above could be:

#!/bin/bash
		A=`ps -C nginx --no-header | wc -l`
		if [ $A -eq 0 ];then
		  /usr/sbin/nginx      # try to start nginx again
		  sleep 2              # wait 2 seconds
		  if [ `ps -C nginx --no-header | wc -l` -eq 0 ];then
		    killall keepalived # nginx failed to start: stop keepalived so the VIP floats to a backup node
		  fi
		fi

Copy this configuration to the backup server, changing state to BACKUP and priority to a value lower than the master's.

Once both machines are configured, start keepalived on each with service keepalived start. After access through the virtual IP succeeds, stop keepalived on the master with service keepalived stop, then request the virtual IP again to see whether it automatically switches to the backup machine (confirm where the VIP sits with ip addr).

Start keepalived on the master again, and the VIP attaches to the master once more.

13. Adapt to PC or mobile device

Different types of sites are returned according to the user's device. We used to handle this with a purely front-end adaptive layout, but writing the sites separately is better both for complexity and ease of use. Common sites such as Taobao and JD are not adaptive: they decide from the user-agent of the request whether to return the PC or the H5 site.

First, in the /usr/share/nginx/html folder, use mkdir to create two folders, pc and mobile, then use vim to create an index.html in each and write a little content:

cd /usr/share/nginx/html
		mkdir pc mobile
		cd pc
		vim index.html    # write something like Hello PC!
		cd ../mobile
		vim index.html    # write something like Hello mobile!

Then, as when setting up a virtual host for a secondary domain name, create a new configuration file fe.sherlocked93.club.conf under the /etc/nginx/conf.d folder:

# /etc/nginx/conf.d/fe.sherlocked93.club.conf

		server {
		  listen 80;
		  server_name fe.sherlocked93.club;

		  location / {
		    root /usr/share/nginx/html/pc;
		    if ($http_user_agent ~* '(Android|webOS|iPhone|iPod|BlackBerry)') {
		      root /usr/share/nginx/html/mobile;
		    }
		    index index.html;
		  }
		}

The configuration is basically the same as before, with mainly one extra if statement that uses the $http_user_agent global variable to examine the user-agent of the request and point to a different root path, returning the corresponding site.
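The regex nginx evaluates in that if block can be tried in plain JavaScript; the ~* modifier corresponds to a case-insensitive match (the user-agent string below is just a sample):

```javascript
// The same test nginx's "if" performs: a case-insensitive match of the
// User-Agent against the mobile keyword list from the config above.
const mobileRe = /(Android|webOS|iPhone|iPod|BlackBerry)/i;

const ua = 'Mozilla/5.0 (iPhone; CPU iPhone OS 13_0 like Mac OS X) AppleWebKit/605.1.15';
const root = mobileRe.test(ua)
  ? '/usr/share/nginx/html/mobile'
  : '/usr/share/nginx/html/pc';

console.log(root);   // /usr/share/nginx/html/mobile
```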

Visit this site in the browser, then simulate mobile access with the F12 developer tools:

[Screenshot: simulated mobile access returning the mobile HTML]

It can be seen that when simulating mobile access, the site returned by nginx becomes the corresponding HTML of mobile.

14. Configure HTTPS

There are plenty of detailed guides online. You can also use whichever cloud provider you purchased from; they usually offer free server certificates, with installation covered in that cloud's operation guide.

The free certificate I got through Tencent Cloud, issued by TrustAsia, can only be used for one domain name; secondary domain names need separate applications, but approval is fast, taking only a few minutes. Then download the certificate bundle, which contains an nginx folder with an xxx.crt and an xxx.key file. Copy them to a directory on the server, then configure as follows:

server {
		listen 443 ssl http2 default_server;   # SSL port 443
		server_name sherlocked93.club;         # the domain name bound to the certificate

		ssl_certificate /etc/nginx/https/1_sherlocked93.club_bundle.crt;   # certificate file path
		ssl_certificate_key /etc/nginx/https/2_sherlocked93.club.key;      # private key file path
		ssl_session_timeout 10m;

		ssl_protocols TLSv1 TLSv1.1 TLSv1.2;   # protocols to allow
		ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:HIGH:!aNULL:!MD5:!RC4:!DHE;
		ssl_prefer_server_ciphers on;

		location / {
		  root  /usr/share/nginx/html;
		  index index.html index.htm;
		}
}

When you finish writing, check with nginx -t -q; if there is no problem, run nginx -s reload. Visiting https://sherlocked93.club/ now gets the HTTPS version of the site.

In general, several security-enhancing headers can be added:

		add_header X-Frame-Options DENY;             # mitigate clickjacking
		add_header X-Content-Type-Options nosniff;   # stop browsers from MIME-sniffing resource types
		add_header X-XSS-Protection 1;               # basic XSS protection

15. Some common skills

15.1 static services

server {
		listen      80;
		server_name static.sherlocked93.club;
		charset utf-8;    # keep Chinese file names from becoming garbled

		location /download {
		  alias /usr/share/nginx/html/static;   # static resource directory

		  autoindex            on;    # enable directory listing for static resources
		  autoindex_exact_size off;   # on (default) shows exact file sizes in bytes; off shows approximate sizes in KB, MB, GB
		  autoindex_localtime  off;   # off (default) shows file times in GMT; on shows server local time
		}
}

15.2 Image hotlink protection

server {
		listen      80;
		server_name *.sherlocked93.club;

		# image hotlink protection
		location ~* \.(gif|jpg|jpeg|png|bmp|swf)$ {
		  valid_referers none blocked server_names ~\.google\. ~\.baidu\. *.qq.com;   # only allow referers from this machine; thanks to @mufachuan's reminder, Baidu and Google are whitelisted too
		  if ($invalid_referer) {
		    return 403;
		  }
		}
}

15.3 request filtering

# all request methods not listed return 403
		if ( $request_method !~ ^(GET|POST|HEAD)$ ) {
		  return 403;
		}

		location / {
		  # IP access restriction (only allow 192.168.0.2)
		  allow 192.168.0.2;
		  deny  all;

		  root  html;
		  index index.html index.htm;
		}

15.4 configure image, font and other static file cache

Because static files such as images, fonts, audio and video usually get a hash added to their names when packed, their cache time can be set quite long: use a forced (strong) cache first, with the negotiated cache as fallback. For static files without a hash in the name, it is recommended to skip the forced cache and rely on the negotiated cache alone to decide whether the cached copy can be used.

# cache time for images and other static files
		location ~ .*\.(css|js|jpg|png|gif|swf|woff|woff2|eot|svg|ttf|otf|mp3|m4a|aac|txt)$ {
		  expires 10d;
		}

		# if you do not want caching
		expires -1;

15.5 single page project history routing configuration

server {
		listen      80;
		server_name fe.sherlocked93.club;

		location / {
		  root  /usr/share/nginx/html/dist;   # the folder produced by the Vue build
		  index index.html index.htm;
		  try_files $uri $uri/ /index.html @rewrites;

		  expires -1;                         # the entry page is generally not force-cached
		  add_header Cache-Control no-cache;
		}

		# API forwarding, if needed
		#location ~ ^/api {
		#  proxy_pass http://be.sherlocked93.club;
		#}

		location @rewrites {
		  rewrite ^(.+)$ /index.html break;
		}
}

15.6 HTTP request forwarding to HTTPS

After HTTPS is configured, the browser can still reach the site over plain HTTP at http://sherlocked93.club/. A 301 redirect can send the domain's HTTP requests to HTTPS:

server {
		listen      80;
		server_name www.sherlocked93.club;

		# single-domain redirect
		if ($host = 'www.sherlocked93.club') {
		  return 301 https://www.sherlocked93.club$request_uri;
		}

		# global non-HTTPS redirect
		if ($scheme != 'https') {
		  return 301 https://$server_name$request_uri;
		}

		# or redirect everything
		return 301 https://$server_name$request_uri;

		# pick whichever of the above you need; do not add them all
}

15.7 pan domain name path separation

This is a very practical skill. Sometimes we may need to configure some secondary or tertiary domain names to point to the corresponding directory automatically through nginx, such as:

  1. test1.doc.sherlocked93.club automatically points to the /usr/share/nginx/html/doc/test1 directory on the server;
  2. test2.doc.sherlocked93.club automatically points to the /usr/share/nginx/html/doc/test2 directory on the server;
server {
		listen      80;
		server_name ~^([\w-]+)\.doc\.sherlocked93\.club$;

		root /usr/share/nginx/html/doc/$1;
}
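The capture group in the server_name regex is what $1 refers to; the same match can be tried out in JavaScript:

```javascript
// Re-create the server_name regex match: the first capture group
// (the subdomain) selects the folder that nginx serves as root.
const re = /^([\w-]+)\.doc\.sherlocked93\.club$/;

const m = 'test1.doc.sherlocked93.club'.match(re);
console.log('/usr/share/nginx/html/doc/' + m[1]);   // /usr/share/nginx/html/doc/test1
```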

15.8 pan domain name forwarding

Similar to the previous functions, sometimes we want to rewrite the secondary or tertiary domain name link to the path we want, so that the backend can resolve different rules according to the route:

  1. test1.serv.sherlocked93.club/api?name=a is automatically forwarded to 127.0.0.1:8080/test1/api?name=a;
  2. test2.serv.sherlocked93.club/api?name=a is automatically forwarded to 127.0.0.1:8080/test2/api?name=a;
server {
		listen      80;
		server_name ~^([\w-]+)\.serv\.sherlocked93\.club$;

		location / {
		  proxy_set_header X-Real-IP       $remote_addr;
		  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
		  proxy_set_header Host            $http_host;
		  proxy_set_header X-NginX-Proxy   true;
		  proxy_pass       http://127.0.0.1:8080/$1$request_uri;
		}
}

16. Best practices

  1. To keep the nginx configuration maintainable, create a separate configuration file for each service, stored in the /etc/nginx/conf.d directory; you can create as many independent configuration files as you like.
  2. For the independent configuration files, the naming convention <service>.conf is recommended. For example, if the domain name is sherlocked93.club, the file should be /etc/nginx/conf.d/sherlocked93.club.conf. If multiple services are deployed, you can add the port nginx forwards to in the file name, e.g. sherlocked93.club.8080.conf; for a secondary domain name, something like fe.sherlocked93.club.conf is recommended.
  3. Frequently reused common configuration can go into the /etc/nginx/snippets folder and be included where needed in nginx configuration files. Name snippets by function, and state the main purpose and intended include locations at the top of each snippet file for easy management. I made snippets for common pieces shown earlier, such as gzip and cors.
  4. Name nginx log files <domain>.<type>.log (e.g. be.sherlocked93.club.access.log and be.sherlocked93.club.error.log) under /var/log/nginx/, with separate access and error log files for each independent service, which makes errors quicker to locate.
