Understanding load balancing, distribution, and clustering, and how to synchronize code across multiple servers

Time: 2019-11-30

Cluster

If our project runs on a single machine, then when that machine fails, or when user traffic is too heavy for one machine to handle, our website may become inaccessible. How do we solve this? We use multiple machines, deploy the same program on each of them, and let several machines run our website at the same time. The next question is how to distribute requests across all of these machines, and this is where the concept of load balancing comes in.

load balancing

Load balancing means that, through a reverse proxy, incoming requests are distributed to different servers according to a specified policy (scheduling algorithm). nginx and LVS are commonly used to implement it. But this raises another problem: what if the load balancing server itself fails? This is where the concept of redundancy comes in.
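
As a rough sketch (the upstream server IPs and the domain name below are placeholders), a minimal nginx load-balancing configuration might look like this:

# Hypothetical nginx config: distribute requests across two application servers
upstream app_servers {
    # round-robin by default; ip_hash, least_conn or weights are other policies
    server 192.168.1.11;
    server 192.168.1.12;
}

server {
    listen 80;
    server_name example.com;

    location / {
        # forward every request to one of the upstream servers
        proxy_pass http://app_servers;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}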

redundancy

Redundancy here means having two or more load balancing servers, one master and one backup. If the master load balancer has a problem, the backup takes over and continues the load balancing. A common implementation uses Keepalived, where the servers compete for a virtual IP (the backup claims the VIP when the master goes down).
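
For illustration only (the interface name, virtual router id, password and virtual IP are made-up values), a Keepalived instance on the master load balancer could be configured roughly like this; the backup machine would use state BACKUP and a lower priority:

# /etc/keepalived/keepalived.conf on the master load balancer (sketch)
vrrp_instance VI_1 {
    state MASTER              # the backup server uses BACKUP
    interface eth0            # network interface that carries the virtual IP
    virtual_router_id 51
    priority 100              # the backup uses a lower value, e.g. 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1234
    }
    virtual_ipaddress {
        192.168.1.100         # the virtual IP that clients actually connect to
    }
}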

Distributed

In essence, being distributed means splitting a large project into parts and running them separately.

Take the example above. Suppose we have a very large number of visitors. We can make the deployment distributed, much like a CDN: build similar clusters in three places, say Beijing, Hangzhou and Shenzhen. Users close to Beijing visit the Beijing cluster, and those close to Shenzhen visit the Shenzhen cluster. In this way our website is split across three regions, each of which is independent.

Another example is Redis. A distributed Redis setup spreads the data across different servers, with each server storing different content, whereas in a MySQL cluster every server holds the same data. This contrast is a good way to understand the difference between the concepts of distribution and clustering.

MySQL master-slave

The MySQL master server writes its SQL operations to the binary log (binlog). The slave server reads the master's binlog and replays the SQL statements.
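
As a rough outline (server ids, host, user, password and log position below are placeholders; the real file name and position come from SHOW MASTER STATUS on the master), setting this up usually means enabling the binlog on the master and pointing the slave at it:

# my.cnf on the master: enable the binary log and set a unique server id
[mysqld]
server-id = 1
log_bin   = mysql-bin

# my.cnf on the slave: a different unique server id
[mysqld]
server-id = 2

-- on the slave, point replication at the master and start it
CHANGE MASTER TO
    MASTER_HOST = '192.168.1.10',
    MASTER_USER = 'repl',
    MASTER_PASSWORD = 'secret',
    MASTER_LOG_FILE = 'mysql-bin.000001',
    MASTER_LOG_POS = 154;
START SLAVE;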

A master-slave setup raises the following issues.

1. The master server can both read and write, while the slave server is read-only.

2. The data just written to the master may not have reached the slave yet when we read from it (replication lag). How do we solve this?

1. If the data is cached, read it from the cache.
2. Force the read to go to the master (see the sketch after this list).
3. Use a PXC cluster, where any node is readable and writable and read-write consistency is strong.
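
For option 2, a minimal sketch of forcing a read to the master in Laravel is the query builder's useWritePdo method, which sends the SELECT over the write connection:

use Illuminate\Support\Facades\DB;

// Read the row over the write (master) connection instead of a read replica.
$user = DB::table('users')
    ->useWritePdo()
    ->where('id', $id)
    ->first();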

How to solve the data inconsistency with Laravel

Set sticky to true in the MySQL connection block of config/database.php.

sticky is an optional value that can be used to immediately read records that have been written to the database during the current request cycle. If the sticky option is enabled and a write operation has been performed during the current request cycle, any further read operation will use the write connection. This ensures that data written within a request cycle can be read back immediately, avoiding the inconsistency caused by master-slave delay. Whether to enable it depends on the needs of the application.
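
A minimal read/write-split connection in config/database.php with sticky enabled might look like the following (hosts, database name and credentials are placeholders):

'mysql' => [
    'driver' => 'mysql',
    'read' => [
        'host' => ['192.168.1.2'],   // slave / read replica
    ],
    'write' => [
        'host' => ['192.168.1.1'],   // master
    ],
    'sticky' => true,                // reuse the write connection after a write in this request
    'database' => 'app',
    'username' => 'root',
    'password' => '',
    'charset' => 'utf8mb4',
    'collation' => 'utf8mb4_unicode_ci',
    'prefix' => '',
],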

How can we synchronize our code to multiple servers?

Laravel provides the laravel/envoy package, which offers a simple, lightweight syntax for defining everyday tasks on remote servers. Deployment tasks and artisan commands can be configured with a Blade-style syntax.

composer global require laravel/envoy

Envoy tasks should be defined in Envoy.blade.php in the root directory of the project. Write the following:

@servers(['web-1' => '192.168.1.1', 'web-2' => '192.168.1.2'])

@task('deploy', ['on' => ['web-1', 'web-2']])
    cd site
    git pull origin {{ $branch }}
    composer update
    php artisan migrate
@endtask

The above code means that when we run envoy run deploy on the command line, Envoy will SSH into web-1 and web-2 and execute:

cd site
git pull origin {{ $branch }}
composer update
php artisan migrate
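
Since the task references a {{ $branch }} variable, its value can be passed on the command line when the task is run, for example:

envoy run deploy --branch=master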


Of course, the premise is that we have already set up SSH access from our machine to these remote servers.
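
A common way to set that up is key-based authentication, for example (the user and hosts are placeholders matching the servers above):

# generate a key pair locally (if we don't already have one)
ssh-keygen -t rsa -b 4096

# copy the public key to each remote server so Envoy can SSH in without a password
ssh-copy-id deploy@192.168.1.1
ssh-copy-id deploy@192.168.1.2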