Cache server varnish

Time:2019-12-12

Blog reference

http://www.178linux.com/76700
http://www.mamicode.com/info-detail-229142.html

About varnish

Varnish is a very lightweight yet powerful application that provides caching services. It is a high-performance, open-source reverse proxy server and HTTP accelerator.

Varnish is configured through VCL (Varnish Configuration Language), a simple cache-policy language in which users can define custom variables.

It provides a number of built-in functions and variables, but its subroutines take no arguments and return no values. Caching policies written in VCL are usually saved to .vcl files,

which must be compiled into binary form before varnish can use them. A complete caching policy is actually composed of several specific subroutines, such as vcl_recv

and vcl_fetch, which run at different stages (or moments) of request processing. If no subroutine is defined for a given stage, varnish executes its built-in default.

Before a VCL policy takes effect, the management process converts it to C code, and the gcc compiler then compiles that C code into a binary program;

installing and running varnish therefore depends on the gcc toolchain. After compilation, the management process is responsible for loading the result into the varnish instance,

that is, the child process. Syntax is checked at compile time, so a VCL file with syntax errors is never loaded. Once compilation completes without errors, the configuration is loaded;

several configurations can be kept loaded at once, so if an earlier policy turns out to be sounder, you can simply activate it again: activating any policy already in the library makes its rules take effect,

with no restart or reload needed, so the cost of changing policies is very small. Loaded policies are cleared only when varnish restarts, though they can also be removed manually

using varnishadm's vcl.discard command.

Varnish supports many different types of back-end storage, which can be specified with the -s option when varnishd starts. The types of back-end storage include:

(1) file: stores all cached data in a single file, which is mapped into memory via the operating system's mmap() system call (memory permitting); you can specify the file's location and size, as well as the allocation granularity, i.e. the smallest unit in which cache space is handed out. Usage: file[,path[,size[,granularity]]]

(2) malloc: when varnish starts, it requests the specified amount of memory from the operating system with the malloc() library call and stores cache objects there. Usage: malloc[,size]

(3) persistent (experimental): works like file, but keeps the data across restarts (the cached data is not cleared when varnish restarts); it is still experimental and not yet considered stable.

Varnish runs two types of processes.

Management process (master), whose responsibilities are:

1. Read in the configuration file

2. Initialize the appropriate storage type (varnish supports writing the cache to disk)

3. Create / read in a cache file of the corresponding size (this feature is still experimental, so it is not recommended for production use yet)

4. Initialize the management structures that associate the cache file's space with the storage allocator

5. fork() the child process(es) and monitor them

Worker process (child):

1. mmap() the previously opened storage file into memory

2. Create and initialize the free-space structures used to store cache objects

3. Spawn many threads to do the actual work:

The master process forks the child and waits on its signal; if the child exits, the master restarts it. The child, in turn, spawns several kinds of threads.

Accept thread: accepts incoming requests and hands each one to an idle worker thread, which responds to the user's request

Worker threads: there are many of them; each takes a request from the queue, processes it, and then moves on to the next

When a worker thread handles a request, it reads the requested URL to check whether a matching object exists in the local cache. On a hit it builds the response message directly;

on a miss it fetches the data from the upstream server, caches it locally, and then builds the response message to answer the request

Epoll thread: one request-response exchange is called a session. Within a session, once a request has been handled the connection is handed to epoll, which monitors it for further events.

Expire thread: cached objects are organized into a binary heap keyed by expiration time. This thread periodically checks the root of the heap and evicts expired objects.

Relationship between threads:

Worker: processing user requests

  Accept: receive user request

When cache space runs out:

The cache space then needs to be cleaned up; this can be done with the LRU algorithm (LRU = Least Recently Used).

Cache processing flow


What is web cache?

Web caching means keeping a copy of a web resource (an HTML page, image, JavaScript file, data, etc.) between the web server and the client (browser). The cache saves a copy of the content it serves; when the next request arrives for the same URL, the cache decides, according to its caching rules, whether to answer with the stored copy or to forward the request to the origin server again. (definition adapted from the Alloy Team)

Cache hit rate type:

Document hit rate: measured by the number of documents served from cache

Byte hit rate: measured by the number of bytes served from cache

Cache processing flow:

Accept request: accept access request from client

Parse request: extract the URL and header information from the client's access request

Query Cache: query whether the cache data contains the request data accessed by the client according to the extracted header information

Freshness monitoring: if the cached data contains data accessed by the client, check the validity of the data

Create response message: when it is determined that the data hit by the cache is valid, the response message is created

Send response message: when the response message is completed, send the response message to the client

Record log: record log information while sending response message

Methods of freshness monitoring:

1. Expiration date or validity:

HTTP/1.0: the Expires header gives the absolute time at which the cached copy expires

For example: Expires: Sat, 18 Jul 2015 03:41:04 GMT

HTTP/1.1: the Cache-Control header gives the document's maximum lifetime as a relative length of time

For example: Cache-Control: max-age=86400 (the cached copy is valid for one day)

2. Server revalidation: ask the origin server whether the data has changed

1) if the original content has not changed, the server responds with headers only, no body, and the response code is 304;

2) if the original content has changed, the response code is 200;

3) if the original content no longer exists, the response code is 404, and the cached item should also be cleared;

3. Conditional request header:

If-Modified-Since: has the original content changed since the specified time?

If-None-Match: each version of a document carries an ETag tag, which changes whenever the content changes
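As an illustration (with hypothetical header values), a conditional revalidation exchange looks like this: the client presents the validators it holds, and the server answers 304 when nothing has changed:

```
GET /logo.png HTTP/1.1
Host: www.example.com
If-Modified-Since: Sat, 18 Jul 2015 03:41:04 GMT
If-None-Match: "686897696a7c876b7e"

HTTP/1.1 304 Not Modified
ETag: "686897696a7c876b7e"
```

The 304 response carries no body, so the cache can keep serving its stored copy.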

Control cache capacity:

Server side: Cache-Control

no-store: do not cache at all;

no-cache: may be cached, but the copy must be revalidated with the origin before being served to a requester;

must-revalidate: may be cached, but once the copy is stale it must be revalidated before being served;

max-age: maximum lifetime (in seconds)

Expires: absolute expiration time

Client-side freshness limits: Cache-Control

max-stale: the client will accept a stale response

max-stale=<s>: the client will accept a response stale by at most <s> seconds

min-fresh=<s>: the response must stay fresh for at least <s> seconds

max-age=<s>: the response's age must not exceed <s> seconds

Note: it is better not to cache responses carrying private data, authentication information, cookies, and the like;

There are two types of processes in varnish:

management:

1) read in the configuration file

2) initialize the appropriate storage type (malloc memory, file in tmp, or persistent)

3) create / read in cache files of the corresponding size

4) initialize the management structure space

5) fork and monitor the child process

child/cache:

1) map the open storage files into memory space

2) create and initialize the free structure space

Varnish has nine state engines, as shown in the following figure:

Nine state engines of varnish


Introduction to the varnish configuration file:

1) defining a backend node:

backend NAME { };

Proxy cache: subroutine definitions

sub STATE_ENGINE { };

The engines are related to one another: each engine defines its exit state with return(x), which then decides which engine handles the request next;
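The chaining described above can be sketched as follows (Varnish 4 syntax; the subroutine name and return keywords are standard, but the policy itself is only an illustration):

```vcl
sub vcl_recv {
    if (req.method == "POST") {
        return (pass);   # exit vcl_recv into pass mode: skip the cache
    }
    return (hash);       # otherwise continue to the cache-lookup engine
}
```

Whatever return() names becomes the exit state of vcl_recv, and varnish then invokes the engine that handles that state.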

2)vcl:

VCL is a simple, "domain"-based programming language. It supports arithmetic and logical operators, regular expressions, setting and unsetting custom variables with set and unset, if conditionals, and built-in functions and variables;

Configuration syntax:

① comments: // for a single line, /* ... */ for multiple lines

② sub NAME defines a subroutine

③ loops are not supported

④ the termination statement return(x) is supported; subroutines have no return values

⑤ variables are specific to particular domains (state engines)

⑥ operators: = (assignment), == (comparison), ~ (regex match), ! (negation), && (and), || (or)

3) built in function of VCL

regsub(str, regexp, sub): match str against the pattern regexp and replace the first match with sub

regsuball(str, regexp, sub): match str against the pattern regexp and replace every match with sub

hash_data(str): feed str into the hash calculation

purge: pick an object out of the cache and delete it

return(x): set the exit state
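For example (a hypothetical sketch; the Host-normalization policy below is only an illustration, not part of this article's configuration), regsub and regsuball might be used in vcl_recv like this:

```vcl
sub vcl_recv {
    # regsub replaces only the first match:
    # strip a leading "www." from the Host header
    set req.http.Host = regsub(req.http.Host, "^www\.", "");

    # regsuball replaces every match:
    # percent-encode all spaces in the requested URL
    set req.url = regsuball(req.url, " ", "%20");
}
```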
4) built-in variables: detailed in the "Built-in variables" reference later in this article.

Configuration function

1. vcl_recv function

Called to receive and process the request: when a request arrives and has been received successfully, vcl_recv inspects the request data to decide how to handle it.
This function generally ends with one of the following keywords:
pass: enter pass mode and hand control of the request to the vcl_pass function.
pipe: enter pipe mode and hand control of the request to the vcl_pipe function.
lookup: enter the hash lookup and hand control of the request to the vcl_hash function.
error code [reason]: return "code" to the client and stop processing the request; "code" is a status code such as 200 or 405, and "reason" is the error message.
2. vcl_pipe function
Called when entering pipe mode; it passes the request straight through to the backend host. As long as neither the request nor the returned content changes, content is relayed unmodified between backend and client until the connection is closed.
This function generally ends with one of the following keywords:
error code [reason]
pipe
3. vcl_pass function
Called when entering pass mode. It passes the request directly to the backend host, and the backend's answer is sent to the client without being cached, so within the current connection the latest content is returned every time. It ends with the keywords:
error code [reason]
pass
4. lookup
Looks up the requested object in the cache and hands control to vcl_hit or vcl_miss depending on the result
5. vcl_hit function
Called automatically after the lookup instruction when the requested content is found in the cache
This function generally ends with one of the following keywords:
deliver: send the found content to the client and hand control to the vcl_deliver function
error code [reason]
pass
6. vcl_miss function
Called after the lookup instruction when the requested content is not found in the cache; it decides whether to retrieve the content from the backend server
This function generally ends with one of the following keywords:
fetch: fetch the requested content from the backend and hand control to the vcl_fetch function
error code [reason]
pass: go to the backend host for the data while performing some extra operations
7. vcl_fetch function
Called after content has been retrieved from the backend host; it inspects what was fetched and decides whether to put it into the cache or return it directly to the client.
This function generally ends with one of the following keywords:
error code [reason]
pass: do not cache the content
deliver: the content may also be cached
8. vcl_deliver function
Called after the requested content has been found in the cache, just before it is sent to the client. This function generally ends with one of the following keywords:
error code [reason]
deliver: respond to the client's request
9. vcl_timeout function
Called just before cached content expires. It generally ends with one of the following keywords:
discard: clear the content from the cache
fetch: go back to the backend host to refresh the data
10. vcl_discard function
Called automatically when cached content expires or when cache space runs out. It usually ends with one of the following keywords:
keep: keep the content in the cache
discard: clear the content from the cache

Profile:

· /etc/varnish/varnish.params - configures the working characteristics of the varnish service process, such as the listening address and port and the caching mechanism;
· /etc/varnish/default.vcl - configures the caching behaviour of each child/cache thread;

1) installation

[root@varnish ~]# yum -y install varnish

2) configure the varnish service configuration file

[root@varnish ~]# vim /etc/sysconfig/varnish 
# Configuration file for varnish

NFILES=131072                        # maximum number of files that can be opened

MEMLOCK=82000                        # memory locked for log space; note that varnish keeps its log in memory

NPROCS="unlimited"                   # maximum number of threads (ulimit -u)

# DAEMON_COREFILE_LIMIT="unlimited"  # keep the default

RELOAD_VCL=1                         # keep the default

# This file contains 4 alternatives, please use only one

## Alternative 1, minimal configuration, no VCL  (method 1)
#
# Listen on port 6081, administration on localhost:6082, and forward to
# content server on localhost:8080.  Use a fixed-size cache file.
#
#DAEMON_OPTS="-a :6081 \
#             -T localhost:6082 \
#             -b localhost:8080 \
#             -u varnish -g varnish \
#             -s file,/var/lib/varnish/varnish_storage.bin,1G"


## Alternative 2, configuration with VCL  (method 2)
#
# Listen on port 6081, administration on localhost:6082, and forward to
# one content server selected by the vcl file, based on the request.  Use a
# fixed-size cache file.
#
#DAEMON_OPTS="-a :6081 \
#             -T localhost:6082 \
#             -f /etc/varnish/default.vcl \
#             -u varnish -g varnish \
#             -S /etc/varnish/secret \
#             -s file,/var/lib/varnish/varnish_storage.bin,1G"


## Alternative 3, advanced configuration  (method 3)
#
# See varnishd(1) for more information.
#
# # Main configuration file. You probably want to change it ?
VARNISH_VCL_CONF=/etc/varnish/test.vcl        # main configuration file
#
# # Default address and port to bind to
# # Blank address means all IPv4 and IPv6 interfaces, otherwise specify
# # a host name, an IPv4 dotted quad, or an IPv6 address in brackets.
# VARNISH_LISTEN_ADDRESS=
VARNISH_LISTEN_PORT=80                        # listening port, 6081 by default
#
# # Telnet admin interface listen address and port
VARNISH_ADMIN_LISTEN_ADDRESS=127.0.0.1        # admin interface listen address
VARNISH_ADMIN_LISTEN_PORT=6082                # admin interface listen port
#
# # Shared secret file for admin interface
VARNISH_SECRET_FILE=/etc/varnish/secret       # shared secret file for the admin interface
#
# # The minimum number of worker threads to start
VARNISH_MIN_THREADS=50                        # minimum number of worker threads
#
# # The Maximum number of worker threads to start
VARNISH_MAX_THREADS=1000                      # maximum number of worker threads
#
# # Idle timeout for worker threads
VARNISH_THREAD_TIMEOUT=120                    # idle timeout for worker threads
#
# # Cache file location
VARNISH_STORAGE_FILE=/var/lib/varnish/varnish_storage.bin   # cache file location (memory storage can be chosen instead)
#
# # Cache file size: in bytes, optionally using k / M / G / T suffix,
# # or in percentage of available disk space using the % suffix.
VARNISH_STORAGE_SIZE=1G                       # define the storage size
#
# # Backend storage specification
VARNISH_STORAGE="file,${VARNISH_STORAGE_FILE},${VARNISH_STORAGE_SIZE}"   # define the storage properties
#
# # Default TTL used when the backend does not specify one
VARNISH_TTL=120                               # default cache TTL
#
# # DAEMON_OPTS is used by the init script.  If you add or remove options, make
# # sure you update this section, too.
DAEMON_OPTS="-a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} \
             -f ${VARNISH_VCL_CONF} \
             -T ${VARNISH_ADMIN_LISTEN_ADDRESS}:${VARNISH_ADMIN_LISTEN_PORT} \
             -t ${VARNISH_TTL} \
             -w ${VARNISH_MIN_THREADS},${VARNISH_MAX_THREADS},${VARNISH_THREAD_TIMEOUT} \
             -u varnish -g varnish \
             -S ${VARNISH_SECRET_FILE} \
             -s ${VARNISH_STORAGE}"
#

## Alternative 4, do it yourself. See varnishd(1) for more information.  (method 4)
#
# DAEMON_OPTS="

3) configure the varnish main configuration file and add the response message header

[root@varnish sysconfig]# cd /etc/varnish/
[root@varnish varnish]# cp default.vcl test.vcl
[root@varnish varnish]# vim test.vcl 
backend default {                        # define the backend host
    .host = "172.16.2.14";               # backend host address
    .port = "80";                        # backend host listening port
}
sub vcl_deliver {                        # defined in the vcl_deliver state engine
    if (obj.hits > 0) {                  # if the number of cache hits is greater than 0
        set resp.http.X-Cache = "hit";   # add an X-Cache response header set to hit
    } else {
        set resp.http.X-Cache = "Miss";  # add an X-Cache response header set to Miss
    }
    return (deliver);                    # define the exit state
}

Apply this configuration:

[root@varnish ~]# /etc/init.d/varnish start
[root@varnish ~]# varnishadm -S /etc/varnish/secret -T 127.0.0.1:6082   # enter the varnish management interface
200        
-----------------------------
Varnish Cache CLI 1.0
-----------------------------
Linux,2.6.32-431.el6.x86_64,x86_64,-sfile,-smalloc,-hcritbit
varnish-3.0.7 revision f544cd8

Type 'help' for command list.
Type 'quit' to close CLI session.

varnish> vcl.load t1 /etc/varnish/test.vcl   # load the configuration file
200        
VCL compiled.
varnish> vcl.use t1                          # activate the configuration
200
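With the configuration loaded and activated, the new header can be checked from a client. Assuming varnish listens on port 80 of the varnish host and the backend page is cacheable (both are assumptions about this setup), two successive requests would be expected to show a miss followed by a hit:

```
$ curl -I http://<varnish-host>/
HTTP/1.1 200 OK
X-Cache: Miss

$ curl -I http://<varnish-host>/
HTTP/1.1 200 OK
X-Cache: hit
```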

VCL state engines:

Built-in variables:

· req.*: related to the request message sent by the client;
· bereq.*: related to the HTTP request varnish sends to the backend host;
· beresp.*: related to the response message the backend host returns to varnish;
· resp.*: related to the response varnish sends to the client;
· obj.*: properties of the cached object stored in cache space; read-only;

Common variables:

·bereq., req.

bereq.http.HEADERS
bereq.request: request method;
bereq.url: requested URL;
bereq.proto: requested protocol version;
bereq.backend: the backend host to call;
req.http.Cookie: value of the Cookie header in the client's request message;
req.http.User-Agent ~ "chrome"

·beresp., resp.

beresp.http.HEADERS
beresp.status: response status code;
beresp.proto: protocol version;
beresp.backend.name: name of the backend host;
beresp.ttl: remaining cacheable time of the content returned by the backend host;

·obj.*

obj.hits: number of times this object has been served from the cache;
obj.ttl: the object's TTL value

·server.*

server.ip
server.hostname

·client.*

Interactive configuration

varnishadm
To log in:

varnishadm -S /etc/varnish/secret -T 127.0.0.1:6082

Profile related:

vcl.list: list the loaded configurations;
vcl.load: load and compile a configuration;
vcl.use: activate a configuration;
vcl.discard: delete a configuration;
vcl.show [-v] <configname>: view the details of the named configuration (with -v, including the built-in defaults);

Runtime parameters:

param.show -l: list the runtime parameters;
param.show <PARAM>
param.set <PARAM> <VALUE>

Cache storage:

storage.list

Back end server:

backend.list

Bypass the cache for certain requests

Example:
sub vcl_recv {
    if (req.url ~ "(?i)^/(login|admin)") {
        return (pass);
    }
}

Deny access for certain kinds of requests

Example:
sub vcl_recv {
    if (req.http.User-Agent ~ "(?i)curl") {
        return (synth(405));
    }
}

For public resources, strip the Set-Cookie header and set a cache lifetime

Example:
sub vcl_backend_response {
    if (beresp.http.Cache-Control !~ "s-maxage") {
        if (bereq.url ~ "(?i)\.(jpg|jpeg|png|gif|css|js)$") {
            unset beresp.http.Set-Cookie;
            set beresp.ttl = 3600s;
        }
    }
}

Pass the real client IP to the backend host (X-Forwarded-For)

Example:
sub vcl_recv {
    if (req.restarts == 0) {
        if (req.http.X-Forwarded-For) {
            set req.http.X-Forwarded-For = req.http.X-Forwarded-For + "," + client.ip;
        } else {
            set req.http.X-Forwarded-For = client.ip;
        }
    }
}

Clear cache based on purge request

Example:
sub vcl_recv {
    if (req.method == "PURGE") {
        return (purge);
    }
}

Set ACL access control

Example:
acl purgers {
    "127.0.0.0"/8;
    "10.1.0.0"/16;
}
sub vcl_recv {
    if (req.method == "PURGE") {
        if (client.ip !~ purgers) {
            return (synth(405, "Purging not allowed for " + client.ip));
        }
        return (purge);
    }
}

Clear cache with ban command

Example:
ban req.url ~ ^/javascripts
ban req.url ~ /js$
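Bans can also be issued over HTTP by handling a custom request method in VCL. The sketch below is an illustrative convention, not part of this article's configuration: the BAN method name and the ACL are assumptions, and the syntax is Varnish 4.

```vcl
acl banners {
    "127.0.0.0"/8;      # illustrative: only loopback clients may issue bans
}

sub vcl_recv {
    if (req.method == "BAN") {
        if (client.ip !~ banners) {
            return (synth(405, "Ban not allowed"));
        }
        # Add a ban expression matching the requested URL
        ban("req.url ~ " + req.url);
        return (synth(200, "Ban added"));
    }
}
```

Unlike purge, a ban invalidates every cached object whose attributes match the expression, not just a single object.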

Configuring multiple hosts on the back end of varnish

Example:
import directors;    # import the directors module
backend server1 {
    .host = "172.16.42.2";
    .port = "80";
}
backend server2 {
    .host = "172.16.42.3";
    .port = "80";
}
sub vcl_init {
    new websrvs = directors.round_robin();
    websrvs.add_backend(server1);
    websrvs.add_backend(server2);
}
sub vcl_recv {
    # pick a backend from the group for this request
    set req.backend_hint = websrvs.backend();
}

Varnish dynamic and static separation

Example:
backend default {
    .host = "172.16.42.10";
    .port = "80";
}
backend appsrv {
    .host = "172.16.42.2";
    .port = "80";
}
sub vcl_recv {
    if (req.url ~ "(?i)\.php$") {
        set req.backend_hint = appsrv;
    } else {
        set req.backend_hint = default;
    }
}

Health status detection for backend hosts

.probe: defines the health check method;
.url: the URL to request during the check; the default is "/";
.request: the exact request to send instead of a simple URL fetch;
.window: how many of the most recent checks to consider when judging health;
.threshold: how many of the checks within .window must have succeeded for the backend to count as healthy;
.interval: how often to run the check;
.timeout: timeout for each check;
.expected_response: the expected response code, 200 by default;
Example:
backend server1 {
    .host = "172.16.42.3";
    .port = "80";
    .probe = {
        .url = "/.healthcheck.html";   # create this test page on the backend first
        .timeout = 1s;
        .interval = 2s;
        .window = 5;
        .threshold = 5;
    }
}

Performance optimization of varnish

· thread_pools: preferably less than or equal to the number of CPU cores;
· thread_pool_max: maximum number of threads per thread pool;
· thread_pool_min: despite its name, effectively the maximum number of idle threads;
· thread_pool_timeout: thread idle timeout;
· thread_pool_add_delay: delay before creating a new thread;
· thread_pool_destroy_delay: delay before killing an idle thread;
Setting method:
/etc/varnish/varnish.params (permanent)
param.set (runtime)
Example:
DAEMON_OPTS="-p thread_pools=6 -p thread_pool_min=5 -p thread_pool_max=500 -p thread_pool_timeout=300"

Varnish log view

1. varnishstat – Varnish Cache statistics

-1: print the statistics once and exit
-1 -f FIELD_NAME: print only the named field
-l: list the field names that can be passed to the -f option;
MAIN.cache_hit
MAIN.cache_miss
Example:
varnishstat -1 -f MAIN.cache_hit -f MAIN.cache_miss
varnishstat -l -f MAIN -f MEMPOOL

2. varnishtop – Varnish log entry ranking

-1: instead of a continuously updated display, print the statistics once and exit.
-i taglist: include tags; several -i options may be given, and one option may carry several tags;
-I <[taglist:]regex>: include entries matching the regex
-x taglist: exclude tags
-X <[taglist:]regex>: exclude entries matching the regex

3. varnishlog – Display Varnish logs
4. varnishncsa – Display Varnish logs in Apache / NCSA combined log format
