From monolith to cluster to microservices [1]

Time: 2022-05-21

1. Monolithic architecture

This is our initial system architecture: no matter what kind of client there is or what the UI presentation looks like, there is only one back end, which keeps things relatively simple;

[Figure: monolithic architecture, many clients and a single back end]

In the beginning the whole project ran as one process, with the various module projects packed together. As the business develops and data volume and traffic keep growing, the monolith is no longer enough. [eg: a stone is too big for one person to move] What should we do?

  • Find a stronger Hercules [upgrade the hardware]: but hardware upgrades have an upper limit;
  • Or get more people to move it together. There are two common ways:
  1. Vertical splitting
  2. Horizontal scaling

So what is vertical, and what is horizontal?

A: Vertical: system splitting, i.e. dividing one system into several cooperating systems according to business and logic.

When the job is too big for one person, split it up by business so that each person does one part of it and finishes it;

[Figure: vertical split of the system by business module]

However, vertical splitting has its limits. Split too much and the system becomes fragmented: not only is it hard to manage, but the pressure on any single node can still be great;

B: Horizontal: what we usually call clustering with load balancing.

Logic that used to be handled by one instance is now handled by multiple instances, all doing the same thing. When a request comes in, which instance should take it? This is where load balancing comes in, and we use nginx to do the forwarding.

In other words, if one person cannot do the job alone, several people all do the same job, and each incoming request is distributed to one of them.

Extension: nginx -> a high-performance HTTP and reverse-proxy web server. Working principle: acting as a reverse proxy, it receives each request and forwards it to a backend according to the configured policy (round-robin by default);

[Figure: nginx load balancing across a cluster of identical instances]
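To make the forwarding policy concrete, here is a minimal round-robin sketch in Python; nginx implements the same policy natively in its upstream configuration, and the backend addresses below are made up:

```python
from itertools import cycle

# Three instances of the same service; nginx's default upstream policy
# rotates through them exactly like this.
backends = cycle(["10.0.0.11:5000", "10.0.0.12:5000", "10.0.0.13:5000"])

def pick_backend() -> str:
    """Each new request goes to the next instance in turn."""
    return next(backends)

for request_id in range(5):
    print(f"request {request_id} -> {pick_backend()}")
```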

In fact, these approaches are all doing the same thing: trying to use limited servers and computing resources to handle the incoming requests and meet the demands of high concurrency and big data;

Now there are three service instances, and all of them need to write logs. We should not implement logging three times; rather than repeatedly building common things, we can extract them as a shared service, i.e. code reuse.

Suppose a project has 100 function points and traffic follows the 80/20 rule: only 20% of the business is heavily used, yet it carries 80% of the traffic. If everything runs in one process, the hot 20% has to share resources equally with everything else, which does not pay off. So we can split out that high-frequency 20% as shared services, deploy them independently, and give them more computing resources while the rest gets less, maximizing resource utilization;

[Figure: the high-frequency 20% extracted as shared services]

2. Distributed

If the projects in our architecture separate out some operations, logic, business, or services and call them jointly as a shared service, a request becomes: I call A, and A calls the LogServer. So we arrive at a new term: distributed.

Distributed: multiple processes cooperating to complete the business;

A: The costs of going distributed:

So is distributed all upside? As the saying goes, "to wear the crown, you must bear its weight". Eg: distributed locks, distributed transactions, and the complexity they bring;

Distributed lock: a given thing can be handled by at most one process at a time; two processes operating on it simultaneously causes problems, so we need mutual exclusion across processes;
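As a minimal sketch of process mutual exclusion, assuming the redis-py package and a Redis server on localhost (the lock name and TTL are made up):

```python
import uuid

import redis  # assumes the redis-py package and a local Redis server

r = redis.Redis()

def acquire(lock_name: str, ttl_seconds: int = 10):
    """Try to take the lock with SET key value NX EX ttl. NX means 'only
    if nobody holds it'; EX expires the lock so a crashed holder cannot
    block everyone forever."""
    token = str(uuid.uuid4())  # identifies this particular holder
    if r.set(lock_name, token, nx=True, ex=ttl_seconds):
        return token
    return None

# Check-and-delete atomically so we never release someone else's lock.
RELEASE = """
if redis.call('get', KEYS[1]) == ARGV[1] then
    return redis.call('del', KEYS[1])
end
return 0
"""

def release(lock_name: str, token: str) -> None:
    r.eval(RELEASE, 1, lock_name, token)

token = acquire("lock:order:42")  # made-up lock name
if token:
    try:
        pass  # ... the mutually exclusive work goes here ...
    finally:
        release("lock:order:42", token)
```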

Distributed transaction: your operation succeeded but mine failed, so we have to do extra work to ensure the consistency of the data;
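One common way to restore consistency is compensation, a simplified saga: if the second step fails, explicitly undo the first. The account functions below are hypothetical stand-ins for calls to two different services:

```python
def debit_account(amount: int) -> None:
    print(f"service A: debit {amount}")

def credit_account(amount: int) -> None:
    raise RuntimeError("service B is down")  # simulate the failure case

def refund_account(amount: int) -> None:
    print(f"service A: refund {amount}")     # the compensating action

def transfer(amount: int) -> bool:
    debit_account(amount)          # step 1 succeeded on service A
    try:
        credit_account(amount)     # step 2 on service B
        return True
    except Exception:
        refund_account(amount)     # roll step 1 back by compensation
        return False

print(transfer(100))  # debit, then refund, then False
```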

Why do we still use it when it is this troublesome? The benefits: independent operation and maintenance, independent scaling, independent deployment [each service enjoys its own separate hardware resources], and better use of resources overall;

3. Microservice architecture

With the passage of time, business keeps pushing technology forward, and the problems of distribution have been solved one by one. Distribution has become a conventional technique, and it has evolved into what we now call microservices;

A. Definition:

Microservice architecture is an architectural pattern (architectural style) that uses distributed services to split up business logic and achieve complete decoupling.

B: How do we split into microservices?

Eg: in the three-tier architecture there is one layer for this, the BLL business logic layer, and the UI is responsible for calling it.

With microservices, the business logic is put inside services and the UI client is responsible for calling them; then clustering, deployment, operation and maintenance, registration and discovery, gateways and all sorts of other pieces come with it, and that becomes microservices.

So is it really as simple as we describe?

In the past, when we called the BLL, success or failure happened within one process and the result was clear at a glance. With microservices, a call to a service may be slow or may fail, and the cause could be any of a series of problems: the code, network packet loss, jitter, the server, the database being down, timeouts, and so on.
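For example, what used to be an in-process method call is now a network request, and the caller has to expect every one of those failure modes. A sketch using the requests package, with a made-up service URL:

```python
import requests  # assumes the requests package

def get_order(order_id: int):
    """In-process BLL: the call either returns or throws, right here.
    Cross-process: it can also time out, lose packets, or hit a dead
    server, so every failure mode needs explicit handling."""
    try:
        resp = requests.get(
            f"http://orders-service/api/orders/{order_id}",  # made-up URL
            timeout=1.0,
        )
        resp.raise_for_status()  # 5xx from a dying server is also a failure
        return resp.json()
    except requests.exceptions.Timeout:
        return None  # too slow: give up rather than hang the caller
    except requests.exceptions.RequestException:
        return None  # DNS, connection reset, packet loss, server down...
```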

From single process to distributed, everything is different

 

4. Full component analysis of microservice architecture

Everything has now been split into independent services. How do we ensure the project remains usable?

1. Ensure high availability of services;

2. Service scalability;

A. Core foundation: high availability

[Figure: a serial call chain across services]

Problem: the risk is too high. In a serial structure, if any node goes wrong, the whole chain breaks down; this is the high-voltage line of the project. If the availability of each service is 99%, what happens with a chain of 2, 3, 7, or 8 services? The consequences are hard to imagine, which fully shows the importance of availability.
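Concretely, a chain of n services at 99% each only succeeds when every hop does, so overall availability is 0.99^n:

```python
# A serial chain only succeeds when every hop succeeds.
for n in (1, 2, 3, 7, 8):
    print(f"{n} services in series: {0.99 ** n:.2%} available")
# 1: 99.00%, 2: 98.01%, 3: 97.03%, 7: 93.21%, 8: 92.27%
```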

How can we guarantee availability? The answer is clustering.

[Figure: services deployed as clusters behind a load balancer]

Nginx + keepalived handle IP drift. We use nginx as the reverse-proxy load balancer, sending client requests to the servers round-robin through nginx. But what if the single nginx we deployed goes down? Then we deploy two, and use keepalived to monitor nginx's health; when one nginx goes down, it automatically switches between active and standby, drifting the virtual IP over to achieve high availability.
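What keepalived automates can be sketched as a watchdog: the standby polls the master and claims the virtual IP when the master stops answering. The URL and IP below are made up, and real keepalived moves the IP via VRRP rather than printing a message:

```python
import time
import urllib.request

MASTER_HEALTH_URL = "http://10.0.0.10/health"  # made-up master address
VIRTUAL_IP = "10.0.0.100"                      # made-up virtual IP
FAILS_BEFORE_TAKEOVER = 3

def master_alive() -> bool:
    try:
        with urllib.request.urlopen(MASTER_HEALTH_URL, timeout=1):
            return True
    except OSError:
        return False

fails = 0
while True:
    fails = 0 if master_alive() else fails + 1
    if fails >= FAILS_BEFORE_TAKEOVER:
        # keepalived would move VIRTUAL_IP to this box via VRRP here.
        print(f"master is down, standby claims {VIRTUAL_IP}")
        break
    time.sleep(1)
```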

B. Scalability:

There are always services facing many clients, and the pressure on our system is uneven across different time periods. A sudden 10x or 100x surge in pressure would bring our system down. When that problem comes, what should we do? We cannot prepare 100x redundant resources in advance.

What we expect is automatic scaling: open new nodes or close nodes as the pressure changes. (docker + swarm, or k8s)

[Figure: nodes opened and closed elastically with load]
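The scaling decision itself can be as simple as a threshold rule; Swarm and the Kubernetes Horizontal Pod Autoscaler apply far more robust versions of the same idea (all numbers below are made up):

```python
def desired_replicas(current: int, avg_cpu: float,
                     low: float = 0.30, high: float = 0.70) -> int:
    """Threshold rule: scale out above `high` average CPU, scale in
    below `low`, never drop under one instance."""
    if avg_cpu > high:
        return current + 1
    if avg_cpu < low and current > 1:
        return current - 1
    return current

print(desired_replicas(3, 0.85))  # -> 4, pressure is rising
print(desired_replicas(3, 0.10))  # -> 2, nodes are idle
```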

But if a new node sits outside and requests cannot reach it, how do we bring it into the cluster to be managed and used?

nginx on its own can cope with shrinking: when a node dies, nginx stops forwarding to it. But it cannot expand automatically; adding a new node means modifying the configuration and reloading the nginx service. So what do we do now?

1. Service registration and discovery

[Figure: service registration and discovery]

What if a node is added?

[Figure: a newly added node registers itself]

And what about deleting a node?

[Figure: a removed node disappears from the registry]

Once this capability is satisfied, automatic scaling can be achieved; Consul takes care of it, and our architecture evolves into:

[Figure: the architecture with Consul added]

Grpc: higher performance for internal transmission within the LAN; externally we still expose ASP.NET Core WebAPI.
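To sketch how registration and discovery could look with Consul's HTTP API (assuming a local Consul agent; the service name, ID, and addresses are made up): each instance registers itself with a health check, Consul drops it when the check fails, and consumers query for the healthy instances:

```python
import requests  # assumes the requests package and a local Consul agent

# 1) On startup, the instance registers itself, with a health check that
#    lets Consul remove it automatically when it dies.
service = {
    "ID": "orders-1",                    # made-up instance id
    "Name": "orders",
    "Address": "10.0.0.11",
    "Port": 5000,
    "Check": {
        "HTTP": "http://10.0.0.11:5000/health",
        "Interval": "10s",
        "DeregisterCriticalServiceAfter": "1m",
    },
}
requests.put(
    "http://127.0.0.1:8500/v1/agent/service/register", json=service
).raise_for_status()

# 2) Consumers (or the gateway) ask Consul for the healthy instances.
entries = requests.get(
    "http://127.0.0.1:8500/v1/health/service/orders",
    params={"passing": "true"},
).json()
print([(e["Service"]["Address"], e["Service"]["Port"]) for e in entries])
```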

Questions:

1. Service exposure: with this many service instances, exposing this many ports is dangerous;

2. Load balancing;

3. The trouble of calling the services;

To solve the above problems, a new wrapping layer is added: the gateway. The gateway handles all of this, and its job is similar to nginx's;

Their difference: nginx is a separate server, and extending it requires writing Lua scripts, whereas a gateway ships with many additional functions;

2. Gateway

For the gateway we can use Ocelot or Kong; Ocelot is recommended [written in C# and easy to extend, at home in the .NET ecosystem]
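As a rough sketch of how Ocelot is driven by configuration (assuming a recent Ocelot version; hosts, ports, and paths are made up), an ocelot.json maps an upstream route to the downstream service instances and load-balances between them:

```json
{
  "Routes": [
    {
      "UpstreamPathTemplate": "/orders/{everything}",
      "UpstreamHttpMethod": [ "Get", "Post" ],
      "DownstreamPathTemplate": "/api/orders/{everything}",
      "DownstreamScheme": "http",
      "DownstreamHostAndPorts": [
        { "Host": "10.0.0.11", "Port": 5000 },
        { "Host": "10.0.0.12", "Port": 5000 }
      ],
      "LoadBalancerOptions": { "Type": "RoundRobin" }
    }
  ],
  "GlobalConfiguration": { "BaseUrl": "http://localhost:9000" }
}
```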

[Figure: the gateway as the single entry and exit point]

As the figure shows, the gateway is the entrance and exit for all traffic, which makes it particularly important in the whole architecture. So we must consider its availability: if the only gateway hangs, the whole thing is over, so we cluster it as well;

[Figure: the gateway deployed as a cluster]

After clustering, how do we access so many gateways? Should the client remember four addresses and decide which one to call?

[Figure: load balancing in front of the gateway cluster]

Problem: when the gateway invokes a service instance, it still has to worry about the service being down or timing out. So besides service registration and forwarding, the gateway here also needs to do routing mapping.

Request without gateway:

[Figure: call relationships without a gateway]

Request with gateway added:

[Figure: call relationships with a gateway]

With a gateway, the relationship can be sorted out clearly

Request timeout:

[Figure: a request timing out]

Avalanche: in a serial link, the failure of any node drags down the whole link. How do we deal with this problem?

Fail fast with a time limit: define a 1s budget per request; beyond 1s, whether it eventually succeeds or not, I no longer wait for the result and count it as failed. One call may fail, but the health of the whole link is preserved.
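A sketch of that fail-fast rule: give the downstream call a 1-second budget and treat anything slower as a failure (call_inventory is a made-up stand-in for any downstream request):

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError

def call_inventory() -> str:
    time.sleep(2)  # stand-in for a downstream call that has gone slow
    return "ok"

with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(call_inventory)
    try:
        result = future.result(timeout=1.0)  # wait at most the 1s budget
    except TimeoutError:
        result = None  # count as failed; the rest of the link stays healthy

print(result)  # None: the slow call was abandoned after 1s
```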

Rate limiting:

[Figure: rate limiting]
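A common rate-limiting sketch is the token bucket: requests spend tokens, tokens refill at a fixed rate, and anything beyond the budget is rejected (the rate and capacity below are made up):

```python
import time

class TokenBucket:
    """Allow at most `rate` requests per second on average, with bursts
    up to `capacity`. Rejected requests can be queued, dropped, or
    answered with HTTP 429, depending on policy."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to the time elapsed since the last check.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=100, capacity=20)  # made-up limits
print(bucket.allow())  # True until the burst budget is spent
```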

Circuit breaking, degradation, etc.

[Figure: circuit breaking and degradation]
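A circuit breaker can be sketched in a few lines: after repeated failures it "opens" and returns a degraded fallback immediately, then periodically lets one probe call through to see whether the downstream service has recovered (the thresholds are made up):

```python
import time

class CircuitBreaker:
    """After `threshold` consecutive failures the circuit opens and calls
    get the fallback immediately; after `reset_seconds` one probe call is
    let through to see whether the downstream has recovered."""

    def __init__(self, threshold: int = 5, reset_seconds: float = 30.0):
        self.threshold = threshold
        self.reset_seconds = reset_seconds
        self.failures = 0
        self.opened_at = 0.0

    def call(self, func, fallback):
        if self.failures >= self.threshold:
            if time.monotonic() - self.opened_at < self.reset_seconds:
                return fallback()               # open: degrade, no real call
            self.failures = self.threshold - 1  # half-open: allow one probe
        try:
            result = func()
            self.failures = 0                   # success closes the circuit
            return result
        except Exception:
            self.failures += 1
            self.opened_at = time.monotonic()
            return fallback()                   # degrade instead of crashing

def flaky():
    raise RuntimeError("downstream is down")

breaker = CircuitBreaker(threshold=2)
for _ in range(3):
    print(breaker.call(flaky, fallback=lambda: "cached value"))
```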

Finally, on top of all the above, there is also authentication and authorization.

[Figure: authentication and authorization at the gateway]

With authentication and authorization added, a request must carry its token. If it has permission, it may pass; if not, it can only go out, turn left, and head home.
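The shape of the gateway-side check can be sketched with an HMAC-signed token (real systems would typically use JWTs with expiry and claims; the secret and user below are made up):

```python
import hashlib
import hmac

SECRET = b"gateway-secret"  # made-up shared secret

def make_token(user: str) -> str:
    sig = hmac.new(SECRET, user.encode(), hashlib.sha256).hexdigest()
    return f"{user}.{sig}"

def check_token(token: str) -> bool:
    """The gateway rejects the request before it reaches any service
    unless the token's signature is valid."""
    try:
        user, sig = token.rsplit(".", 1)
    except ValueError:
        return False  # malformed token
    expected = hmac.new(SECRET, user.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

token = make_token("alice")
print(check_token(token))        # True: the request may pass
print(check_token(token + "x"))  # False: out the door and turn left
```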

At this point, the core model is completed

Food for thought: how should we split the system to ensure high cohesion and low coupling?