Installation and configuration of varnish cache server under Linux

Time: 2020-02-17

Varnish is a high-performance, open-source reverse proxy server and HTTP accelerator. Compared with the traditional Squid, Varnish offers higher performance, faster responses, and more convenient management. Its author, Poul-Henning Kamp, is one of the FreeBSD kernel developers, and Varnish's new software architecture is designed to cooperate closely with modern hardware and operating systems. In 1975 there were only two kinds of storage media, memory and hard disk; today a computer's main memory sits behind L1, L2, and even L3 caches inside the CPU, and hard disks have their own on-board caches as well. Squid manages object replacement itself, so it cannot see these layers or optimize for them, whereas the operating system can. Varnish therefore leaves this work to the operating system, and that is the core idea behind its cache design.
Verdens Gang (http://www.vg.no), the largest online newspaper in Norway, replaced 12 Squid servers with 3 Varnish servers and got better performance than before. This is the best-known success story for Varnish.

Features of Varnish:
1. Memory-based cache: the cached data is lost after a restart
2. Good I/O performance by relying on virtual memory
3. Supports precise cache lifetimes, configurable in the 0-60 second range
4. Flexible configuration management through VCL
5. On a 32-bit machine the maximum size of the cache file is 2 GB
6. Powerful management functions, provided by tools such as varnishtop, varnishstat, varnishadm, and varnishlog (see the examples after this list)
7. A cleanly structured, well-designed state machine
8. Cached objects are managed with a binary heap, so expired objects can be removed proactively
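The management tools mentioned in point 6 are installed together with Varnish. A minimal sketch of how they are typically invoked (counter and log-tag names vary between Varnish versions, and the admin address and secret path below are the ones configured later in this article):

varnishstat -1        # dump all cache and traffic counters once
varnishtop -i RxURL   # show the most frequently requested URLs
varnishlog            # stream the shared-memory request log
varnishadm -T 127.0.0.1:6082 -S /usr/local/varnish/etc/varnish/secret status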

Comparison between Varnish and Squid
Squid is also a high-performance proxy cache server. The two have the following similarities and differences:
Similarities:
Both are reverse proxy servers
Both are open-source software
The differences are also Varnish's advantages:
Varnish is very stable. Running the same workload, a Squid server is more likely to fail than a Varnish server, because Squid has to be restarted frequently.
Varnish serves requests faster. It relies on the operating system's virtual memory page cache, so all cached data is read directly from memory, whereas Squid reads it from disk.
Varnish supports more concurrent connections, because it releases TCP connections faster than Squid and can therefore sustain more TCP connections under high concurrency.
Varnish can purge part of the cache in batches with a regular expression through its management port, which Squid cannot do (see the example after this list).
Squid runs as a single process on a single CPU core, while Varnish forks a worker process that handles requests with multiple threads, so it can reasonably make use of all cores.
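As an illustration of the regular-expression purge, a sketch for Varnish 3.x (the admin address 127.0.0.1:6082 and the secret path are taken from the configuration built later in this article, and the URL pattern is only an example):

varnishadm -T 127.0.0.1:6082 -S /usr/local/varnish/etc/varnish/secret "ban.url ^/images/"

A single command like this invalidates every cached object whose URL matches the pattern.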
Of course, compared with the traditional Squid, Varnish also has disadvantages:
Once the Varnish process hangs, crashes, or is restarted, all cached data is released from memory and every request goes to the back-end servers; under high concurrency this puts great pressure on the back end.
When requests for the same URL are spread over different Varnish servers by HA/F5 load balancing, each of those servers has to fetch the object from the back end and cache its own copy. This wastes cache space on the Varnish nodes and degrades performance.
Solutions:
For the first problem, when traffic is heavy it is recommended to run Varnish with its memory cache and to put several Squid servers behind it as a second-level cache. When a Varnish process or server is restarted and its cache is emptied, the requests that would otherwise penetrate to the back end are absorbed by Squid, which also compensates for the fact that Varnish's in-memory cache is released on restart.
The second problem can be solved by hashing on the URL at the load balancer, so that requests for a given URL are always sent to the same Varnish server.

The workflow of varnish
1. Communication between processes
When Varnish starts, it runs two processes: the master (management) process and the child (worker) process. The master reads in and initializes the stored configuration, then forks and monitors the child. The child allocates threads for the caching work; it manages those threads and spawns many worker threads.
While the main thread of the child process initializes, the large storage file is mapped into memory as a whole. If the file exceeds the system's virtual memory, the originally configured mmap size is reduced and loading continues. Idle storage structures are then created, initialized, and placed in the storage-management struct, waiting to be allocated.
A thread responsible for new HTTP connections then starts waiting for users. When a new HTTP connection arrives, this thread only accepts it and wakes up a worker thread from the waiting thread pool to process the request.
After a worker thread has read the URI, it looks for an existing object; on a hit the object is returned directly, and on a miss it is fetched from the back-end server and put into the cache. If the cache is full, old objects are released according to the LRU algorithm. In addition, a timeout thread checks the lifetime of every object in the cache; when an object's TTL has expired, it is deleted and the corresponding storage memory is freed.
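On a running system this process and thread structure can be observed directly. A minimal sketch (the n_wrk counter names are specific to Varnish 3.x):

ps -ef | grep varnishd | grep -v grep   # one management process plus one child process
ps -Lf -C varnishd | wc -l              # thread count (one header line included), most of them worker threads
varnishstat -1 | grep n_wrk             # worker-thread counters (created, queued, dropped, ...)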
2. Relationships between the structures of the configuration file
The VCL configuration shown below is organized as subroutines that mirror this workflow: a request enters vcl_recv, is looked up via vcl_hash, passes through vcl_hit or vcl_miss (fetching from the back end in vcl_fetch on a miss), and is finally returned to the client in vcl_deliver, while vcl_pipe, vcl_pass, and vcl_error handle the non-cacheable and error paths.

Varnish installation

Varnish requires the PCRE library, so build and install it first:

wget http://ftp.cs.stanford.edu/pub/exim/pcre/pcre-8.33.tar.gz
tar xzf pcre-8.33.tar.gz
cd pcre-8.33
./configure
make && make install
cd ../

Building varnish-3.0.4 fails with the following error:
varnishadm.c:48:33: error: editline/readline.h: No such file or directory
varnishadm.c: In function ‘cli_write’:
varnishadm.c:76: warning: implicit declaration of function ‘rl_callback_handler_remove’
varnishadm.c:76: warning: nested extern declaration of ‘rl_callback_handler_remove’
varnishadm.c: In function ‘send_line’:
varnishadm.c:179: warning: implicit declaration of function ‘add_history’
varnishadm.c:179: warning: nested extern declaration of ‘add_history’
varnishadm.c: In function ‘varnishadm_completion’:
varnishadm.c:216: warning: implicit declaration of function ‘rl_completion_matches’
varnishadm.c:216: warning: nested extern declaration of ‘rl_completion_matches’
varnishadm.c:216: warning: assignment makes pointer from integer without a cast
varnishadm.c: In function ‘pass’:
varnishadm.c:233: error: ‘rl_already_prompted’ undeclared (first use in this function)
varnishadm.c:233: error: (Each undeclared identifier is reported only once
varnishadm.c:233: error: for each function it appears in.)
varnishadm.c:235: warning: implicit declaration of function ‘rl_callback_handler_install’
varnishadm.c:235: warning: nested extern declaration of ‘rl_callback_handler_install’
varnishadm.c:239: error: ‘rl_attempted_completion_function’ undeclared (first use in this function)
varnishadm.c:300: warning: implicit declaration of function ‘rl_forced_update_display’
varnishadm.c:300: warning: nested extern declaration of ‘rl_forced_update_display’
varnishadm.c:303: warning: implicit declaration of function ‘rl_callback_read_char’
varnishadm.c:303: warning: nested extern declaration of ‘rl_callback_read_char’
make[3]: *** [varnishadm-varnishadm.o] Error 1
make[3]: Leaving directory `/root/lnmp/src/varnish-3.0.4/bin/varnishadm’
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/root/lnmp/src/varnish-3.0.4/bin’
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/root/lnmp/src/varnish-3.0.4′
make: *** [all] Error 2
No fix was found for this error (it is caused by the missing editline/readline development headers), so varnish-3.0.3 is used instead.

Download, build, and install varnish-3.0.3:

wget http://repo.varnish-cache.org/source/varnish-3.0.3.tar.gz
tar xzf varnish-3.0.3.tar.gz
cd varnish-3.0.3
export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig
./configure --prefix=/usr/local/varnish --enable-debugging-symbols --enable-developer-warnings --enable-dependency-tracking --with-jemalloc
make && make install
/usr/bin/install -m 755 ./redhat/varnish.initrc /etc/init.d/varnish
/usr/bin/install -m 644 ./redhat/varnish.sysconfig /etc/sysconfig/varnish
/usr/bin/install -m 755 ./redhat/varnish_reload_vcl /usr/local/varnish/bin
useradd -M -s /sbin/nologin varnish
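Before continuing, it is worth confirming that the binary was installed correctly; varnishd -V simply prints the version banner and exits:

/usr/local/varnish/sbin/varnishd -V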
Create symbolic links so that the Varnish commands are on the default PATH:

ln -s /usr/local/varnish/sbin/varnishd /usr/sbin/
ln -s /usr/local/varnish/bin/varnish_reload_vcl /usr/bin/
ln -s /usr/local/varnish/bin/varnishadm /usr/bin/
Register the varnish init script and enable it at boot:

chkconfig --add varnish
chkconfig varnish on

To generate a varnish management key:

The code is as follows:

uuidgen > /usr/local/varnish/etc/varnish/secret
chmod 644 /usr/local/varnish/etc/varnish/secret
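This secret is what varnishd (via -S in the startup configuration below) and varnishadm use to authenticate management connections. Once Varnish is running, access can be verified as follows (a sketch; 127.0.0.1:6082 is the admin address used later in this article, and a working connection answers with PONG):

varnishadm -T 127.0.0.1:6082 -S /usr/local/varnish/etc/varnish/secret ping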

To modify the varnish startup configuration:

The code is as follows:

sed -i "s@^VARNISH_VCL_CONF=/etc/varnish/default.vcl@#VARNISH_VCL_CONF=/etc/varnish/default.vcl\nVARNISH_VCL_CONF=/usr/local/varnish/etc/varnish/linuxeye.vcl@" /etc/sysconfig/varnish
sed -i "s@^VARNISH_LISTEN_PORT=6081@#VARNISH_LISTEN_PORT=6081\nVARNISH_LISTEN_PORT=80@" /etc/sysconfig/varnish
sed -i "s@^VARNISH_SECRET_FILE=/etc/varnish/secret@#VARNISH_SECRET_FILE=/etc/varnish/secret\nVARNISH_SECRET_FILE=/usr/local/varnish/etc/varnish/secret@" /etc/sysconfig/varnish
sed -i "s@^VARNISH_STORAGE_FILE=/var/lib/varnish/varnish_storage.bin@#VARNISH_STORAGE_FILE=/var/lib/varnish/varnish_storage.bin\nVARNISH_STORAGE_FILE=/usr/local/varnish/var/varnish_storage.bin@" /etc/sysconfig/varnish
sed -i "s@^VARNISH_STORAGE_SIZE.*@VARNISH_STORAGE_SIZE=150M@" /etc/sysconfig/varnish
sed -i "s@^VARNISH_STORAGE=.*@VARNISH_STORAGE=\"malloc,\${VARNISH_STORAGE_SIZE}\"@" /etc/sysconfig/varnish
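After these substitutions, the relevant lines of /etc/sysconfig/varnish should end up roughly as follows (a sketch of the expected result; port 80 and the 150M cache size match the running configuration shown at the end of this article):

VARNISH_VCL_CONF=/usr/local/varnish/etc/varnish/linuxeye.vcl
VARNISH_LISTEN_PORT=80
VARNISH_SECRET_FILE=/usr/local/varnish/etc/varnish/secret
VARNISH_STORAGE_FILE=/usr/local/varnish/var/varnish_storage.bin
VARNISH_STORAGE_SIZE=150M
VARNISH_STORAGE="malloc,${VARNISH_STORAGE_SIZE}"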

If your server has multiple logical CPUs, you can also tune the thread pools:
In /etc/sysconfig/varnish, custom runtime parameters can be appended to DAEMON_OPTS in the form "-p parameter=value", for example:
DAEMON_OPTS="-a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} \
             -f ${VARNISH_VCL_CONF} \
             -T ${VARNISH_ADMIN_LISTEN_ADDRESS}:${VARNISH_ADMIN_LISTEN_PORT} \
             -t ${VARNISH_TTL} \
             -w ${VARNISH_MIN_THREADS},${VARNISH_MAX_THREADS},${VARNISH_THREAD_TIMEOUT} \
             -u varnish -g varnish \
             -S ${VARNISH_SECRET_FILE} \
             -s ${VARNISH_STORAGE} \
             -p thread_pools=2"    # this last line is the added parameter
After Varnish is started, it goes into the background and the shell returns to the command line. Note that a running Varnish consists of two processes, the main (management) process and a child process; if the child process runs into trouble, the main process regenerates a new child.
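The added parameter can be checked (and changed) at runtime through the management interface, a sketch assuming the admin address and secret file configured above:

varnishadm -T 127.0.0.1:6082 -S /usr/local/varnish/etc/varnish/secret param.show thread_pools
# parameters can also be adjusted on the fly:
# varnishadm -T 127.0.0.1:6082 -S /usr/local/varnish/etc/varnish/secret param.set thread_pools 2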

VCL configuration

The code is as follows:

/usr/local/varnish/etc/varnish/linuxeye.vcl
# A back-end host named webserver is defined with "backend": ".host" specifies the IP address
# or domain name of the back-end host, and ".port" specifies its service port.
backend webserver {
    .host = "127.0.0.1";
    .port = "8080";
}
# Request processing starts in vcl_recv
sub vcl_recv {
    if (req.restarts == 0) {
        if (req.http.x-forwarded-for) {
            set req.http.X-Forwarded-For =
                req.http.X-Forwarded-For + ", " + client.ip;
        } else {
            set req.http.X-Forwarded-For = client.ip;
        }
    }
    # If the request method is none of GET, HEAD, PUT, POST, TRACE, OPTIONS or DELETE,
    # switch to pipe mode. Note that the conditions are joined with "&&".
    if (req.request != "GET" &&
        req.request != "HEAD" &&
        req.request != "PUT" &&
        req.request != "POST" &&
        req.request != "TRACE" &&
        req.request != "OPTIONS" &&
        req.request != "DELETE") {
        return (pipe);
    }
    # Switch to pass mode if the request method is neither GET nor HEAD
    if (req.request != "GET" && req.request != "HEAD") {
        return (pass);
    }
    if (req.http.Authorization || req.http.Cookie) {
        return (pass);
    }
    # Cache and accelerate the linuxeye.com domain. This is a wildcard match:
    # all host names ending in linuxeye.com are cached.
    if (req.http.host ~ "^(.*).linuxeye.com") {
        set req.backend = webserver;
    }
    # URLs ending in .jsp, .do or .php (possibly followed by a query string) are read
    # directly from the back-end server; everything else is looked up in the cache.
    if (req.url ~ "\.(jsp|do|php)($|\?)") {
        return (pass);
    } else {
        return (lookup);
    }
}
sub vcl_pipe {
    return (pipe);
}

sub vcl_pass {
    return (pass);
}

sub vcl_hash {
    hash_data(req.url);
    if (req.http.host) {
        hash_data(req.http.host);
    } else {
        hash_data(server.ip);
    }
    return (hash);
}

sub vcl_hit {
    return (deliver);
}

sub vcl_miss {
    return (fetch);
}

# If the request method is GET and the URL begins with /upload, cache the response
# for 300 seconds (5 minutes)
sub vcl_fetch {
    if (req.request == "GET" && req.url ~ "^/upload(.*)$") {
        set beresp.ttl = 300s;
    }

    # Cache common static files for 30 days and strip their Set-Cookie header
    if (req.request == "GET" && req.url ~ "\.(png|gif|jpg|jpeg|bmp|swf|css|js|html|htm|xsl|xml|pdf|ppt|doc|docx|chm|rar|zip|ico|mp3|mp4|rmvb|ogg|mov|avi|wmv|txt)$") {
        unset beresp.http.set-cookie;
        set beresp.ttl = 30d;
    }
    return (deliver);
}

# Add a response header that shows whether the request hit the cache
sub vcl_deliver {
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT from demo.linuxeye.com";
    } else {
        set resp.http.X-Cache = "MISS from demo.linuxeye.com";
    }
    return (deliver);
}

# A custom error page can be defined in vcl_error
sub vcl_error {
    set obj.http.Content-Type = "text/html; charset=utf-8";
    set obj.http.Retry-After = "5";
    synthetic {"
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
 "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html>
  <head>
    <title>"} + obj.status + " " + obj.response + {"</title>
  </head>
  <body>
    <h1>Error "} + obj.status + " " + obj.response + {"</h1>
    <p>"} + obj.response + {"</p>
    <h3>Guru Meditation:</h3>
    <p>XID: "} + req.xid + {"</p>
    <hr>
    <p>Varnish cache server</p>
  </body>
</html>
"};
    return (deliver);
}

sub vcl_init {
    return (ok);
}

sub vcl_fini {
    return (ok);
}

Check that the VCL configuration is correct:

The code is as follows:

service varnish configtest

or

The code is as follows:

varnishd -C -f /usr/local/varnish/etc/varnish/linuxeye.vcl
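varnishd -C compiles the VCL into C and prints the generated source; if the VCL contains an error it prints the error instead and exits with a non-zero status, so the generated code can simply be discarded when only the result matters (a small convenience sketch):

varnishd -C -f /usr/local/varnish/etc/varnish/linuxeye.vcl > /dev/null && echo "VCL OK"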

Start varnish:

The code is as follows:

service varnish start

To view the varnish status:

The code is as follows:

service varnish status

Dynamic load VCL configuration:

The code is as follows:

service varnish reload

Stop varnish:

The code is as follows:

service varnish stop

To confirm that varnish is now listening on port 80:

The code is as follows:

# netstat -tpln | grep :80
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 15249/varnishd
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 19468/nginx
tcp 0 0 :::80 :::* LISTEN 15249/varnishd
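With Varnish answering on port 80, the X-Cache header added in vcl_deliver shows whether a response was served from the cache. A quick check (the host name and URL here are only examples; the first request is expected to be a MISS and a repeated request for cacheable content a HIT):

curl -sI -H "Host: demo.linuxeye.com" http://127.0.0.1/upload/logo.png | grep X-Cache
curl -sI -H "Host: demo.linuxeye.com" http://127.0.0.1/upload/logo.png | grep X-Cache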

To view the varnish process:

The code is as follows:

# ps -ef | grep varnishd | grep -v grep
root 15248 1 0 11:47 ? 00:00:00 /usr/sbin/varnishd -P /var/run/varnish.pid -a :80 -f /usr/local/varnish/etc/varnish/linuxeye.vcl -T 127.0.0.1:6082 -t 120 -w 50,1000,120 -u varnish -g varnish -S /usr/local/varnish/etc/varnish/secret -s malloc,150M
varnish 15249 15248 0 11:47 ? 00:00:00 /usr/sbin/varnishd -P /var/run/varnish.pid -a :80 -f /usr/local/varnish/etc/varnish/linuxeye.vcl -T 127.0.0.1:6082 -t 120 -w 50,1000,120 -u varnish -g varnish -S /usr/local/varnish/etc/varnish/secret -s malloc,150M

Varnish access log
Varnishncsa records HTTP requests to a log file in the NCSA common log format.
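Each request becomes one Apache-style line in the log; a made-up example of what such a line looks like:

127.0.0.1 - - [17/Feb/2020:11:47:21 +0800] "GET /upload/logo.png HTTP/1.1" 200 4285 "-" "curl/7.29.0"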

The code is as follows:

/usr/bin/install -m 755 ./redhat/varnishncsa.initrc /etc/init.d/varnishncsa
chmod +x /etc/init.d/varnishncsa
chkconfig varnishncsa on
mkdir -p /usr/local/varnish/logs

Edit varnishncsa startup configuration

The code is as follows:

ln -s /usr/local/varnish/bin/varnishncsa /usr/bin
sed -i 's@^logfile.*@logfile="/usr/local/varnish/logs/varnishncsa.log"@' /etc/init.d/varnishncsa

To start varnishncsa:

The code is as follows:

service varnishncsa start

Use logrotate to rotate the log file (rotated daily):

The code is as follows:

cat > /etc/logrotate.d/varnish << EOF
/usr/local/varnish/logs/varnishncsa.log {
daily
rotate 5
missingok
dateext
compress
notifempty
sharedscripts
postrotate
[ -e /var/run/varnishncsa.pid ] && kill -USR1 \`cat /var/run/varnishncsa.pid\`
endscript
}
EOF

Log rotation debug test:

The code is as follows:

logrotate -df /etc/logrotate.d/varnish
