Distributed FastDFS cluster deployment

Time:2019-12-1

FastDFS

Yu Qing, the author of FastDFS, describes it on GitHub as follows: "FastDFS is an open source high performance distributed file system. Its major functions include: file storing, file syncing and file accessing (file uploading and file downloading), and it can resolve the high capacity and load balancing problem. FastDFS should meet the requirement of the website whose service is based on files, such as photo sharing sites and video sharing sites." In other words, its main functions are file storage, file synchronization and file access (upload and download); it addresses high capacity and load balancing, and is aimed at file-based services such as photo and video sharing sites.

FastDFS has two roles: tracker and storage. The tracker is responsible for scheduling file access and for load balancing. The storage role stores and manages files: file storage, file synchronization and the file access interface. It also manages metadata, the attributes of a file expressed as key-value pairs. Both the tracker and the storage tiers can consist of one or more servers, and servers can be added or taken offline at any time without affecting the online service; of course, each tier needs at least one server running. Note that all servers in the tracker cluster are equal peers, so they can be added or removed at any time according to server load.

The project documentation also describes the storage layer in detail. To support large capacity, storage nodes are organized into volumes (also called groups). The storage system consists of one or more volumes whose files are independent of each other; the capacity of the whole system is the sum of the capacities of all volumes. A volume is composed of one or more storage servers, and the files on all storage servers within a volume are identical, so the servers in a volume provide both redundant backup and load balancing. When a server is added to a volume, the system automatically synchronizes the existing files to it; once synchronization finishes, the new server is switched into online service. When storage space runs low or is exhausted, volumes can be added dynamically: simply add one or more servers and configure them as a new volume to increase the capacity of the storage system. Do not worry too much about the volume/group concept for now; it is explained in detail during installation and deployment below.

In fastdfs, the identification of a file consists of two parts: volume name and file name.
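For example, a file ID returned after an upload might look like the following (the file name hash here is made up purely for illustration):

group1/M00/00/00/wKgAxxxxxxxxxxxxAAAAAB1234567890.txt
# group1   -> the volume (group) the file was stored in
# M00      -> virtual disk prefix, mapped to store_path0 in the storage configuration
# /00/00/  -> two-level data directory created by the storage server under that store path
# the last part is the file name generated by the storage server
# On disk this would live under something like /data01/fastdfs/data/00/00/ on the group1 nodes configured below.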

Environment description

  • Operating system: CentOS Linux release 7.2.1511
  • System disk: 274 GB
  • Data disks: 3.7 TB × 12
  • CPU: 32 cores (Intel(R) Xeon(R))
  • Memory: 8 GB

Architecture design

![](https://img2018.cnblogs.com/blog/1844824/201910/1844824-20191024163710574-711006142.png)

Figure 1 DFS architecture

Working mode: the client sends a request to a tracker; the tracker selects a storage node and returns its information (the "source" metadata) to the client; the client then accesses that storage node directly using this metadata.
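As a concrete sketch of this flow, the FastDFS command-line tools can be used once the cluster described below is running (the file ID shown is a made-up placeholder; substitute the one your own upload returns):

$ fdfs_upload_file /etc/fdfs/client.conf ./photo.jpg
group2/M00/00/00/example_file_id.jpg                    # the tracker picked a storage server in group2 and the client uploaded to it
$ fdfs_file_info /etc/fdfs/client.conf group2/M00/00/00/example_file_id.jpg        # query the file's metadata
$ fdfs_download_file /etc/fdfs/client.conf group2/M00/00/00/example_file_id.jpg ./photo_copy.jpg   # download directly from the storage server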

![Highly available FastDFS architecture](https://img2018.cnblogs.com/blog/1844824/201910/1844824-20191024163712649-1893121002.png)
Figure 2 High-availability FastDFS architecture (the red dotted line is discussed separately)

Concise summary

1. The core system has only two roles: tracker server and storage server.

2. All tracker servers are peers; there is no master-slave relationship (the "tracker leader" in the figure is discussed separately; for now just understand that the trackers are equal peers).

3. The storage servers are grouped, and the files on the storage servers in the same group are exactly the same.

4. Storage servers in different groups do not communicate with each other; servers within the same group synchronize files with each other.

5. Each storage server actively reports its status information to every tracker server, and each tracker server records the information of the storage servers.

6. If the trunk (small-file merging) feature is enabled, the tracker coordinates the storage servers to elect a trunk server.

Cluster deployment

Prepare the environment

Table 1 Software list and versions

| Name | Description | Link |
| --- | --- | --- |
| CentOS | 7.x (the operating system) | — |
| libfastcommon | Common functions split out from FastDFS | libfastcommon v1.0.39 |
| FastDFS | FastDFS itself | FastDFS v5.11 |
| fastdfs-nginx-module | Module connecting FastDFS and nginx; resolves the synchronization delay within a group | fastdfs-nginx-module v1.20 |
| nginx | nginx 1.12.2 (the latest version installable via yum on CentOS 7) | nginx 1.12.2 |

Note: in Chrome, the file downloaded by clicking a link above may be named differently from one fetched by copying the link address, because Chrome renames the download according to the link text.

Table 2 Server IPs, service allocation and port planning

| Machine | IP address | Application / service | Port |
| --- | --- | --- | --- |
| A | 10.58.10.136 | tracker | 22122 |
| A | 10.58.10.136 | storage-group1 | 23000 |
| A | 10.58.10.136 | storage-group2 | 33000 |
| A | 10.58.10.136 | libfastcommon | — |
| A | 10.58.10.136 | nginx | 8888 |
| A | 10.58.10.136 | fastdfs-nginx-module | — |
| B | 10.58.10.137 | tracker | 22122 |
| B | 10.58.10.137 | storage-group1 | 23000 |
| B | 10.58.10.137 | storage-group3 | 43000 |
| B | 10.58.10.137 | libfastcommon | — |
| B | 10.58.10.137 | nginx | 8888 |
| B | 10.58.10.137 | fastdfs-nginx-module | — |
| C | 10.58.10.138 | tracker | 22122 |
| C | 10.58.10.138 | storage-group2 | 33000 |
| C | 10.58.10.138 | storage-group3 | 43000 |
| C | 10.58.10.138 | libfastcommon | — |
| C | 10.58.10.138 | nginx | 8888 |
| C | 10.58.10.138 | fastdfs-nginx-module | — |

Before installation, note the following:

1. Remember to grant read and write permissions on the storage directories used below (logs, data, PID files, etc.); a sketch of the commands follows this list.

2. The configuration examples below contain explanatory comments after the configuration items; when editing the real configuration files, be sure to delete these trailing comments (the text starting with "#").
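As a minimal sketch of point 1, assuming the directory layout planned in Table 2, that the FastDFS daemons run as root, and that nginx runs as the nginx user (adjust owners and modes to your own policy):

$ mkdir -p /data/fastdfs/{tracker,client,ngx_mod}
$ mkdir -p /data/fastdfs/storage/group1 /data/fastdfs/storage/group2   # only the groups planned for this machine
$ mkdir -p /data01/fastdfs /data02/fastdfs      # ... one directory per store_path listed later
$ mkdir -p /data/fastdfs/cache/nginx/proxy_cache/temp
$ chown -R nginx:nginx /data/fastdfs/cache/nginx   # proxy cache path used in nginx.conf below; assumes the nginx user exists (useradd nginx if not)
$ chmod -R 755 /data/fastdfs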

Initialize environment

#Install build environment
$ yum groups install "Development Tools" -y
$ yum install perl -y
$ yum -y install redhat-rpm-config.noarch
$ yum -y install gd-devel
$ yum -y install perl-devel perl-ExtUtils-Embed
$ yum -y install pcre-devel
$ yum -y install openssl openssl-devel
$ yum -y install gcc-c++ autoconf automake
$ yum install -y zlib-devel
$ yum -y install libxml2 libxml2-devel
$ yum -y install libxslt-devel
$ yum -y install GeoIP GeoIP-devel GeoIP-data
$ yum install gperftools -y

Install libfastcommon

Perform the following operations on machines A, B and C:

$ tar -zxvf libfastcommon-1.0.39.tar.gz
$ cd libfastcommon-1.0.39/
$ ./make.sh
$ ./make.sh install

libfastcommon is installed as /usr/lib64/libfastcommon.so. Note that newer versions automatically create a link to libfastcommon.so under /usr/local/lib; with older versions you need to create the links manually:

ln -s /usr/lib64/libfastcommon.so /usr/local/lib/libfastcommon.so
ln -s /usr/lib64/libfastcommon.so /usr/lib/libfastcommon.so

If libfdfsclient.so exists, you can link it into /usr/local/lib as well:

ln -s /usr/lib64/libfdfsclient.so /usr/local/lib/libfdfsclient.so
ln -s /usr/lib64/libfdfsclient.so /usr/lib/libfdfsclient.so

Note: it is best to check for yourself with `ls -l | grep libfastcommon` in /usr/lib/ and /usr/local/lib that the links were created successfully.
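For instance, on a machine where the links were created correctly you would see something like:

$ ls -l /usr/lib/ /usr/local/lib/ | grep libfastcommon
# expect libfastcommon.so -> /usr/lib64/libfastcommon.so in both directories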

Install tracker

Perform the following operations on machines A, B and C:

$ mkdir -p /data/fastdfs/tracker
$ tar -zxvf fastdfs-5.11.tar.gz
$ cd fastdfs-5.11/
$ ./make.sh
$ ./make.sh install
$ # Prepare the configuration file
$ cp /etc/fdfs/tracker.conf.sample /etc/fdfs/tracker.conf   # tracker node

To modify the tracker’s configuration file:

$ vim /etc/fdfs/tracker.conf
# The items to modify are as follows:
max_connections=1024                # default 256; maximum number of connections
port=22122                          # tracker server port (default 22122, usually left unchanged)
base_path=/data/fastdfs/tracker     # root directory where logs and data are stored

Create a fastdfs-tracker.service unit so that the service can be started, restarted and stopped with systemctl, and can be enabled to start automatically at boot.

$ # Edit the unit file
$ vim /usr/lib/systemd/system/fastdfs-tracker.service
[Unit]
Description=The FastDFS File server
After=network.target remote-fs.target nss-lookup.target

[Service]
Type=forking
ExecStart=/usr/bin/fdfs_trackerd /etc/fdfs/tracker.conf start
ExecStop=/usr/bin/fdfs_trackerd /etc/fdfs/tracker.conf stop
ExecRestart=/usr/bin/fdfs_trackerd /etc/fdfs/tracker.conf restart

[Install]
WantedBy=multi-user.target

Save /usr/lib/systemd/system/fastdfs-tracker.service and exit vim. Start the fastdfs-tracker service as follows:

$ systemctl daemon-reload
$ systemctl enable fastdfs-tracker.service
$ systemctl start fastdfs-tracker.service

After the tracker service is started, we can check whether the port is listening with the following command:

$ netstat -tulnp | grep 22122   # check whether the service is started and the port is listening

Install storage

If the FastDFS source archive was already extracted when installing the tracker, the extraction step here can be skipped.

$ tar -zxvf fastdfs-5.11.tar.gz  
$ cd fastdfs-5.11/
$ ./make.sh
$ ./make.sh install

Machine A (group1 / group2)

Copy the configuration file of storage under the fastdfs-5.11 directory (two copies):

$ sudo mkdir -p /data/fastdfs/storage/group1
$ sudo mkdir -p /data/fastdfs/storage/group2
$ sudo cp /etc/fdfs/storage.conf.sample /etc/fdfs/storage-group1.conf   # storage node for group1
$ sudo cp /etc/fdfs/storage.conf.sample /etc/fdfs/storage-group2.conf   # storage node for group2
$ sudo cp /etc/fdfs/client.conf.sample /etc/fdfs/client.conf            # client file, used for testing

According to the architecture design, we modify the above three files in turn:

  • Modify the configuration file of group1: /etc/fdfs/storage-group1.conf
$ sudo vim /etc/fdfs/storage-group1.conf
#The content to be modified is as follows
group_name=group1
port=23000                          # storage service port (default 23000)
base_path=/data/fastdfs/storage/group1   # root directory for data and log files
store_path_count=6
store_path0=/data01/fastdfs         # first storage directory of group1
store_path1=/data02/fastdfs         # second storage directory of group1
store_path2=/data03/fastdfs         # third storage directory of group1
store_path3=/data04/fastdfs         # fourth storage directory of group1
store_path4=/data05/fastdfs         # fifth storage directory of group1
store_path5=/data06/fastdfs         # sixth storage directory of group1
tracker_server=10.58.10.136:22122   # tracker server IP and port
tracker_server=10.58.10.137:22122   # tracker server IP and port
tracker_server=10.58.10.138:22122   # tracker server IP and port
http.server_port=8888               # HTTP port for file access (default 8888; keep consistent with nginx)
  • Modify the configuration file of group2: /etc/fdfs/storage-group2.conf
$ sudo vim /etc/fdfs/storage-group2.conf
#The content to be modified is as follows
group_name=group2
port=33000                          # storage service port (default 23000, changed to 33000)
base_path=/data/fastdfs/storage/group2   # root directory for data and log files
store_path_count=6
store_path0=/data07/fastdfs         # first storage directory of group2
store_path1=/data08/fastdfs         # second storage directory of group2
store_path2=/data09/fastdfs         # third storage directory of group2
store_path3=/data10/fastdfs         # fourth storage directory of group2
store_path4=/data11/fastdfs         # fifth storage directory of group2
store_path5=/data12/fastdfs         # sixth storage directory of group2
tracker_server=10.58.10.136:22122   # tracker server IP and port
tracker_server=10.58.10.137:22122   # tracker server IP and port
tracker_server=10.58.10.138:22122   # tracker server IP and port
http.server_port=8888               # HTTP port for file access (default 8888; keep consistent with nginx)
  • Modify the client's configuration file: /etc/fdfs/client.conf
$ sudo vim /etc/fdfs/client.conf
#The content to be modified is as follows
base_path=/data/fastdfs/client
tracker_server=10.58.10.136:22122   # tracker server IP and port
tracker_server=10.58.10.137:22122   # tracker server IP and port
tracker_server=10.58.10.138:22122   # tracker server IP and port
  • After modifying the above three configuration files, create the systemd units:

fastdfs-storage-group1.service

$ vim /usr/lib/systemd/system/fastdfs-storage-group1.service
#Edit startup file
[Unit]
Description=The FastDFS File server
After=network.target remote-fs.target nss-lookup.target

[Service]
Type=forking
ExecStart=/usr/bin/fdfs_storaged /etc/fdfs/storage-group1.conf start
ExecStop=/usr/bin/fdfs_storaged /etc/fdfs/storage-group1.conf stop
ExecRestart=/usr/bin/fdfs_storaged /etc/fdfs/storage-group1.conf restart

[Install]
WantedBy=multi-user.target

fastdfs-storage-group2.service

$ vim /usr/lib/systemd/system/fastdfs-storage-group2.service
#Edit startup file
[Unit]
Description=The FastDFS File server
After=network.target remote-fs.target nss-lookup.target

[Service]
Type=forking
ExecStart=/usr/bin/fdfs_storaged /etc/fdfs/storage-group2.conf start
ExecStop=/usr/bin/fdfs_storaged /etc/fdfs/storage-group2.conf stop
ExecRestart=/usr/bin/fdfs_storaged /etc/fdfs/storage-group2.conf restart

[Install]
WantedBy=multi-user.target
  • After the storage units are created, start the two storage services (shown here for group1; repeat for group2):
$ systemctl daemon-reload
$ systemctl enable fastdfs-storage-group1.service
$ systemctl start fastdfs-storage-group1.service

The startup process may fail because of permission problems or configuration mistakes. Use `systemctl status fastdfs-storage-group1.service` to check the state of the service, and combine it with the logs (the log files are under /data/fastdfs/storage/group1/logs/) to locate the problem quickly. If something unexpected happens, also check the "Summary of problems" section at the end for known pitfalls.
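A few commands that are useful for the troubleshooting described above (paths follow the base_path values configured earlier):

$ systemctl status fastdfs-storage-group1.service            # current state plus the last few journal lines
$ journalctl -u fastdfs-storage-group1.service -n 50         # recent systemd journal entries for the unit
$ tail -n 100 /data/fastdfs/storage/group1/logs/storaged.log # FastDFS storage log under base_path/logs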

$ netstat -tulnp   # check whether the services are started and the ports (23000, 33000) are listening
  • Once the services are up, we can check the FastDFS cluster status:
#View cluster status
$ fdfs_monitor /etc/fdfs/storage-group1.conf list

The console prints the following information, indicating that it is successful:

[2018-11-06 00:00:00] DEBUG - base_path=/data/fastdfs/storage/group1, connect_timeout=30, network_timeout=60, tracker_server_count=2, anti_steal_token=0, anti_steal_secret_key length=0, use_connection_pool=0, g_connection_pool_max_idle_time=3600s, use_storage_id=0, storage server id count: 0
server_count=3, server_index=0
tracker server is 10.58.10.136:22122,10.58.10.137:22122,10.58.10.138:22122
group count: 2
Group 1:
...
  • Upload a file through the client to test
# If the command returns a file ID such as group1/M00/00/00/xx.txt, the upload succeeded
$ fdfs_upload_file /etc/fdfs/client.conf test.txt
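To verify the upload end to end, the returned file ID can be queried and downloaded again (the ID below is the placeholder from the comment above; use the one your upload actually returned):

$ fdfs_file_info /etc/fdfs/client.conf group1/M00/00/00/xx.txt
$ fdfs_download_file /etc/fdfs/client.conf group1/M00/00/00/xx.txt /tmp/xx.txt
$ diff test.txt /tmp/xx.txt    # no output means the round trip succeeded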

Machine B (group1 / group3)

The configuration process is similar to machine A; the following points need to be modified:

  • Create the directories and copy the configuration files.
$ sudo mkdir -p /data/fastdfs/storage/group1
$ sudo mkdir -p /data/fastdfs/storage/group3
$ sudo cp /etc/fdfs/storage.conf.sample /etc/fdfs/storage-group1.conf   # storage node for group1
$ sudo cp /etc/fdfs/storage.conf.sample /etc/fdfs/storage-group3.conf   # storage node for group3
$ sudo cp /etc/fdfs/client.conf.sample /etc/fdfs/client.conf            # client file, used for testing
  • Modify the configuration file of group3, /etc/fdfs/storage-group3.conf; the group1 configuration is identical to machine A's.
$ sudo vim /etc/fdfs/storage-group3.conf
#The content to be modified is as follows
group_name=group3
port=43000                          # storage service port (default 23000, changed to 43000)
base_path=/data/fastdfs/storage/group3   # root directory for data and log files
store_path_count=6
store_path0=/data07/fastdfs         # first storage directory of group3
store_path1=/data08/fastdfs         # second storage directory of group3
store_path2=/data09/fastdfs         # third storage directory of group3
store_path3=/data10/fastdfs         # fourth storage directory of group3
store_path4=/data11/fastdfs         # fifth storage directory of group3
store_path5=/data12/fastdfs         # sixth storage directory of group3
tracker_server=10.58.10.136:22122   # tracker server IP and port
tracker_server=10.58.10.137:22122   # tracker server IP and port
tracker_server=10.58.10.138:22122   # tracker server IP and port
http.server_port=8888               # HTTP port for file access (default 8888; keep consistent with nginx)
  • The client configuration file is the same as on machine A, so it is not repeated here.
  • Create a unit to start group3; fastdfs-storage-group1.service is the same as on machine A, just copy it.
$ vim /usr/lib/systemd/system/fastdfs-storage-group3.service
#Edit startup file
[Unit]
Description=The FastDFS File server
After=network.target remote-fs.target nss-lookup.target

[Service]
Type=forking
ExecStart=/usr/bin/fdfs_storaged /etc/fdfs/storage-group3.conf start
ExecStop=/usr/bin/fdfs_storaged /etc/fdfs/storage-group3.conf stop
ExecRestart=/usr/bin/fdfs_storaged /etc/fdfs/storage-group3.conf restart

[Install]
WantedBy=multi-user.target
  • Run the startup commands so that both fastdfs-storage-group1.service and fastdfs-storage-group3.service are running.

Machine C (group2 / group3)

The configuration process is similar to machine A; the following points need to be modified:

  • Create the directories and copy the configuration files.
$ sudo mkdir -p /data/fastdfs/storage/group2
$ sudo mkdir -p /data/fastdfs/storage/group3
$ sudo cp /etc/fdfs/storage.conf.sample /etc/fdfs/storage-group2.conf   # storage node for group2
$ sudo cp /etc/fdfs/storage.conf.sample /etc/fdfs/storage-group3.conf   # storage node for group3
$ sudo cp /etc/fdfs/client.conf.sample /etc/fdfs/client.conf            # client file, used for testing
  • Modify the configuration file of group2, /etc/fdfs/storage-group2.conf; the group3 configuration is identical to machine B's.
$ sudo vim /etc/fdfs/storage-group2.conf
#The content to be modified is as follows
group_name=group2
port=33000                          # storage service port (default 23000, changed to 33000)
base_path=/data/fastdfs/storage/group2   # root directory for data and log files
store_path_count=6
store_path0=/data01/fastdfs         # first storage directory of group2
store_path1=/data02/fastdfs         # second storage directory of group2
store_path2=/data03/fastdfs         # third storage directory of group2
store_path3=/data04/fastdfs         # fourth storage directory of group2
store_path4=/data05/fastdfs         # fifth storage directory of group2
store_path5=/data06/fastdfs         # sixth storage directory of group2
tracker_server=10.58.10.136:22122   # tracker server IP and port
tracker_server=10.58.10.137:22122   # tracker server IP and port
tracker_server=10.58.10.138:22122   # tracker server IP and port
http.server_port=8888               # HTTP port for file access (default 8888; keep consistent with nginx)
  • The client configuration file is the same as on machine A, so it is not repeated here.
  • Create the units to start group2 and group3; they are already shown above, just copy them.
  • Run the startup commands so that both fastdfs-storage-group2.service and fastdfs-storage-group3.service are running.

Installing nginx and the fastdfs-nginx-module

Step 1: In the FastDFS source directory, copy the http.conf and mime.types files to /etc/fdfs so that nginx can access the storage service.

# To be performed on all three machines
$ cp ./conf/http.conf /etc/fdfs/    # used for nginx access
$ cp ./conf/mime.types /etc/fdfs/   # used for nginx access

Step 2: Install the fastdfs-nginx-module:

#To be performed on all three machines
$ tar -zxvf V1.20.tar.gz
$ cp fastdfs-nginx-module-1.20/src/mod_fastdfs.conf /etc/fdfs/mod_fastdfs.conf

Step 3: Modify the fastdfs-nginx-module-1.20/src/config file: find the ngx_module_incs and CORE_INCS settings and change them as follows:

ngx_module_incs="/usr/include/fastdfs /usr/include/fastcommon/"
CORE_INCS="$CORE_INCS /usr/include/fastdfs /usr/include/fastcommon/"

If this is not done, compiling nginx fails with: /usr/include/fastdfs/fdfs_define.h:15:27: fatal error: common_define.h: No such file or directory
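A hedged way to apply this change non-interactively; the sed pattern assumes the stock v1.20 config still uses the default /usr/local/include paths, so check with grep first and edit by hand if the lines look different:

$ grep -n 'ngx_module_incs\|CORE_INCS' fastdfs-nginx-module-1.20/src/config
$ sed -i 's|/usr/local/include|/usr/include/fastdfs /usr/include/fastcommon/|g' fastdfs-nginx-module-1.20/src/config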

Step 4: After that, unpack, build and install nginx:

$ tar -zxvf nginx-1.12.2.tar.gz
$ cd nginx-1.12.2
$ ./configure --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/lib/nginx/tmp/client_body --http-proxy-temp-path=/var/lib/nginx/tmp/proxy --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi --http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi --http-scgi-temp-path=/var/lib/nginx/tmp/scgi --pid-path=/run/nginx.pid --lock-path=/run/lock/subsys/nginx --user=nginx --group=nginx --with-file-aio --with-ipv6 --with-http_auth_request_module --with-http_ssl_module --with-http_v2_module --with-http_realip_module --with-http_addition_module --with-http_xslt_module=dynamic --with-http_image_filter_module=dynamic --with-http_geoip_module=dynamic --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_degradation_module --with-http_slice_module --with-http_stub_status_module --with-http_perl_module=dynamic --with-mail=dynamic --with-mail_ssl_module --with-pcre --with-pcre-jit --with-stream=dynamic --with-stream_ssl_module --with-google_perftools_module --with-debug --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -m64 -mtune=generic' --with-ld-opt='-Wl,-z,relro -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -Wl,-E' --add-module=${YOUR_PATH}/fastdfs-nginx-module-1.20/src
$ make
$ make install

Note: replace ${YOUR_PATH} above with the actual path to fastdfs-nginx-module-1.20 and make sure the path is correct.

  • Modify machine A's /etc/fdfs/mod_fastdfs.conf configuration file:
connect_timeout=2
network_timeout=30
base_path=/data/fastdfs/ngx_mod
load_fdfs_parameters_from_tracker=true
storage_sync_file_max_delay = 86400
use_storage_id = false
storage_ids_filename = storage_ids.conf
tracker_server=10.58.10.136:22122   # tracker server IP and port
tracker_server=10.58.10.137:22122   # tracker server IP and port
tracker_server=10.58.10.138:22122   # tracker server IP and port
group_name=group1/group2            # global group name
url_have_group_name = true
log_level=info
log_filename=
response_mode=proxy
if_alias_prefix=
flv_support = true
flv_extension = flv
group_count = 2
[group1]
group_name=group1                   # group name inside this section
storage_server_port=23000
store_path_count=6
store_path0=/data01/fastdfs 
store_path1=/data02/fastdfs 
store_path2=/data03/fastdfs 
store_path3=/data04/fastdfs 
store_path4=/data05/fastdfs
store_path5=/data06/fastdfs 
[group2]
group_name=group2
storage_server_port=33000
store_path_count=6
store_path0=/data07/fastdfs 
store_path1=/data08/fastdfs 
store_path2=/data09/fastdfs 
store_path3=/data10/fastdfs 
store_path4=/data11/fastdfs
store_path5=/data12/fastdfs
  • Configure nginx so that it can serve files from storage:
$ sudo vim /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
worker_rlimit_nofile 65535;

error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 65535;
    use epoll;
    accept_mutex off;
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;
    gzip                on;

    server_names_hash_bucket_size 128;
    client_header_buffer_size 32k;
    large_client_header_buffers 4 32k;
    client_max_body_size 300m;

    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Connection '';
    proxy_set_header REMOTE-HOST $remote_addr;
    proxy_set_header HOST $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_connect_timeout 90;
    proxy_send_timeout   90;
    proxy_read_timeout   90;
    proxy_buffer_size 16k;
    proxy_buffers 8 64k;
    proxy_busy_buffers_size 128k;
    proxy_temp_file_write_size 128k;


    proxy_cache_path /data/fastdfs/cache/nginx/proxy_cache levels=1:2
    keys_zone=http-cache:200m max_size=1g inactive=30d;
    proxy_temp_path /data/fastdfs/cache/nginx/proxy_cache/temp;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    include /etc/nginx/conf.d/*.conf;

    server {
        listen       80 default_server;
        listen       [::]:80 default_server;
        server_name  _;
        root         /usr/share/nginx/html;

        # Load configuration files for the default server block.
        #include /etc/nginx/default.d/*.conf;

        location ~ ^/ok(\..*)?$ {
            return 200 "OK";
        }

        location /nginx {
            stub_status on;
        }

        location /healthcheck {
            check_status on;
        }

        location ^~ /group1/ {
            proxy_next_upstream http_502 http_504 error timeout invalid_header;
            proxy_cache http-cache;
            proxy_cache_valid 200 304 12h;
            proxy_cache_key $uri$is_args$args;

            add_header 'Access-Control-Allow-Origin' $http_origin;
            add_header 'Access-Control-Allow-Credentials' 'true';
            add_header "Access-Control-Allow-Methods" "GET, POST, HEAD, PUT, DELETE, OPTIONS, PATCH";
            add_header "Access-Control-Allow-Headers" "Origin, No-Cache, Authorization, X-Requested-With, If-Modified-Since, Pragma, Last-Modified, Cache-Control, Expires, Content-Type";
            if ($request_method = 'OPTIONS') {
                return 200 'OK';
            }

            proxy_pass http://fdfs_group1;
            expires 30d;
        }
        location ^~ /group2/ {
            proxy_next_upstream http_502 http_504 error timeout invalid_header;
            proxy_cache http-cache;
            proxy_cache_valid 200 304 12h;
            proxy_cache_key $uri$is_args$args;

            add_header 'Access-Control-Allow-Origin' $http_origin;
            add_header 'Access-Control-Allow-Credentials' 'true';
            add_header "Access-Control-Allow-Methods" "GET, POST, HEAD, PUT, DELETE, OPTIONS, PATCH";
            add_header "Access-Control-Allow-Headers" "Origin, No-Cache, Authorization, X-Requested-With, If-Modified-Since, Pragma, Last-Modified, Cache-Control, Expires, Content-Type";
            if ($request_method = 'OPTIONS') {
                return 200 'OK';
            }

            proxy_pass http://fdfs_group2;
            expires 30d;
        }
        location ^~ /group3/ {
            proxy_next_upstream http_502 http_504 error timeout invalid_header;
            proxy_cache http-cache;
            proxy_cache_valid 200 304 12h;
            proxy_cache_key $uri$is_args$args;

            add_header 'Access-Control-Allow-Origin' $http_origin;
            add_header 'Access-Control-Allow-Credentials' 'true';
            add_header "Access-Control-Allow-Methods" "GET, POST, HEAD, PUT, DELETE, OPTIONS, PATCH";
            add_header "Access-Control-Allow-Headers" "Origin, No-Cache, Authorization, X-Requested-With, If-Modified-Since, Pragma, Last-Modified, Cache-Control, Expires, Content-Type";
            if ($request_method = 'OPTIONS') {
                return 200 'OK';
            }

            proxy_pass http://fdfs_group3;
            expires 30d;
        }

        location ~/purge(/.*) {
            allow 127.0.0.1;
            allow 192.168.1.0/24;
            allow 10.58.1.0/24;
            deny all;
            proxy_cache_purge http-cache $1$is_args$args;
        }
    }

    server {
        listen 8888;
        server_name localhost;
        
        location /ok.htm {
            return 200 "OK";
        }

        location ~/group[0-9]/ {
            ngx_fastdfs_module;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }

    upstream fdfs_group1 {
        server 10.58.10.136:8888 max_fails=0;
        server 10.58.10.137:8888 max_fails=0;

        keepalive 10240;
        check interval=2000 rise=2 fall=3 timeout=1000 type=http default_down=false;
        check_http_send "GET /ok.htm HTTP/1.0\r\nConnection:keep-alive\r\n\r\n";
        check_keepalive_requests 100;
    }

    upstream fdfs_group2 {
        server 10.58.10.136:8888 max_fails=0;
        server 10.58.10.138:8888 max_fails=0;

        keepalive 10240;
        check interval=2000 rise=2 fall=3 timeout=1000 type=http default_down=false;
        check_http_send "GET /ok.htm HTTP/1.0\r\nConnection:keep-alive\r\n\r\n";
        check_keepalive_requests 100;
    }

    upstream fdfs_group3 {
        server 10.58.10.137:8888 max_fails=0;
        server 10.58.10.138:8888 max_fails=0;

        keepalive 10240;
        check interval=2000 rise=2 fall=3 timeout=1000 type=http default_down=false;
        check_http_send "GET /ok.htm HTTP/1.0\r\nConnection:keep-alive\r\n\r\n";
        check_keepalive_requests 100;
    }

}
  • Start nginx service:
sudo nginx -c /etc/nginx/nginx.conf

Visit http://localhost/ok.htm and check that a 200 status with the body "OK" is returned. If startup fails, check the nginx error log in /var/log/nginx/error.log, which is the default error log path for nginx on CentOS; if you changed error_log, look in the file it points to. One caveat: the check, check_status, check_http_send and check_keepalive_requests directives in the configuration above come from the third-party nginx_upstream_check_module, and proxy_cache_purge comes from ngx_cache_purge; neither module is included in the ./configure line shown earlier, so either compile them in as well or remove those directives, otherwise nginx will refuse to start with an "unknown directive" error.
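A quick sanity check after starting nginx (the group1 file ID is a placeholder standing in for the one returned by the earlier upload test):

$ sudo nginx -t -c /etc/nginx/nginx.conf             # configuration syntax check
$ curl -i http://127.0.0.1/ok.htm                    # port 80 health check, expect HTTP 200 with body "OK"
$ curl -I http://127.0.0.1:8888/ok.htm               # port 8888 (storage access server) health check
$ curl -I http://127.0.0.1/group1/M00/00/00/xx.txt   # fetch an uploaded file through the proxy and ngx_fastdfs_module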

  • The configuration of machines B and C is basically the same as above, and the nginx configuration file is exactly the same on all machines. The only thing that must be changed is the group_name values in /etc/fdfs/mod_fastdfs.conf, adjusted to the groups stored on that node. Note, however, that the x in the section headers [groupx] must increase in order, starting from 1.

Summary of problems

When problems occur, first use the information printed on the console to see why a service failed to start; second, use the logs. Most of the time the logs are the most effective way to troubleshoot. The logs of the tracker and storage services are stored in the logs directory under the base_path configured in the corresponding service's configuration file. For nginx, if you configured your own log path, look there; by default on CentOS the logs are under /var/log/nginx/. In this environment we also have to troubleshoot the fastdfs-nginx-module extension, whose log is stored under the base_path configured in /etc/fdfs/mod_fastdfs.conf.
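For reference, the log locations implied by the base_path values used in this article (the mod_fastdfs log file name is an assumption; check the logs directory under its base_path):

$ tail -f /data/fastdfs/tracker/logs/trackerd.log             # tracker log
$ tail -f /data/fastdfs/storage/group1/logs/storaged.log      # storage log for group1 (similarly for the other groups)
$ tail -f /var/log/nginx/error.log                            # nginx error log
$ tail -f /data/fastdfs/ngx_mod/logs/mod_fastdfs.log          # fastdfs-nginx-module log (file name assumed)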

Multiple group configuration ports in a cluster

Each group needs an independent storage service, so the ports of the storage groups on the same host must not conflict. Here group1 is planned as 23000, group2 as 33000, and group3 as 43000.

Storage instances of the same group cannot run on the same machine. When the tracker schedules synchronization, members of the same group synchronize with each other automatically and must use the same port, so the storage port of a given group has to be identical on every host that carries it.
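A quick way to confirm the planning is to check, on each machine, that only the ports assigned to it are listening:

$ netstat -tulnp | grep -E '22122|23000|33000|43000'
# machine A should show 22122, 23000 and 33000; B shows 22122, 23000 and 43000; C shows 22122, 33000 and 43000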

Configuration of multiple group storage paths in a cluster

The storage paths of different groups on the same host should not be mixed together; configure a separate set of paths for each group.

The storage paths of the same group on different hosts may differ, but their number must be the same and their sizes should be roughly equal.

Configuration of multiple storage instances on the same node

On the same machine we deployed two groups (the cluster has three machines and each group keeps a replica on another node). When starting with `systemctl start fastdfs-storage-groupx.service` (where groupx is group1, group2 or group3), the second group would not start: the status stayed at `Loaded: loaded (/usr/lib/systemd/system/fastdfs-storage-group1.service; enabled; vendor preset: disabled)` until the unit finally failed and `Active: inactive` turned into `Active: exited` (the figure below shows the tracker's output; the storage problem looks similar).


Figure 3 storage group failed to start

The workaround at the time: run `systemctl daemon-reload` once before starting each storage unit, then start it. For example, to start both fastdfs-storage-group1.service and fastdfs-storage-group2.service, do the following:

  • Start fastdfs-storage-group1.service first
sudo systemctl daemon-reload
sudo systemctl enable fastdfs-storage-group1.service
sudo systemctl start fastdfs-storage-group1.service
  • Observe the status of fastdfs-storage-group1.service:
systemctl status fastdfs-storage-group1.service
  • When Active shows "running", the startup succeeded and fastdfs-storage-group2.service can be started next; otherwise, check the logs to find the problem:
sudo systemctl daemon-reload
sudo systemctl enable fastdfs-storage-group2.service
sudo systemctl start fastdfs-storage-group2.service
  • Observe the status of fastdfs-storage-group2.service until it is running.

A warning such as Unknown lvalue 'ExecRestart' in section 'Service' may appear in the logs. At the time no solution was found; running `yum install systemd-*` did not fix it. (In fact, ExecRestart= is not a directive that systemd supports; systemd only provides ExecStart=, ExecStop=, ExecReload= and similar, so the warning can be silenced simply by removing the ExecRestart= lines from the unit files.)
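A possible cleanup, assuming the units were created exactly as shown earlier in this article, is simply to drop the unsupported directive and reload systemd:

$ sudo sed -i '/^ExecRestart=/d' /usr/lib/systemd/system/fastdfs-*.service
$ sudo systemctl daemon-reload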

Nginx module configuration of fastdfs

On every storage machine, modify /etc/fdfs/mod_fastdfs.conf and set the group_name values according to the groups stored on that node. Two configuration items need attention; the following takes a node carrying group2 and group3 as an example:

  • Global group name

In the global section, group_name must list the groups separated by "/", and the names must match the group_name values in the per-group sections. For example:

group_name=group2/group3
  • Group name within a group section
    The section names must start at [group1] and continue in order ([group2], ...); the name here is the string inside the square brackets, similar to section headers in a MySQL configuration file.
group_name=group2/group3            # global
...
group_count = 2
[group1]                            # note: section names must start at [group1], even though this section describes group2
group_name=group2
storage_server_port=33000
store_path_count=6
......
[group2]                            # and the sections must continue in order
group_name=group3
storage_server_port=43000
store_path_count=6
......

Finally, mistakes are inevitable; you are welcome to criticize and correct them, and please point out anything that is wrong. Later I will explain the relationship between FastDFS's tracker and storage in more detail.


This article was first published on the blog "Empty Mind Like a Valley". Please credit the author and indicate the source when reprinting.
