Understanding the evolution of technical architecture through real project requirements

Time: 2022-06-10

1. Don't put all your eggs in one basket: from monolithic application to microservices

Imagine the neighbor upstairs is renovating, and sloppy construction causes a leak in one of your bedrooms. What do you do?

Most people would call a repairman and temporarily sleep in another room, rather than move the whole family to a hotel while the bedroom is being fixed. Only that one room leaks; the other rooms are still perfectly usable.

How this plays out in software:

Monolithic application:

All of the project's modules are packaged together and deployed to a server as one unit. Suppose the project is an e-commerce system with an order module, a delivery module, and so on. Think of each module as a room in your home, one module per room. Now the room corresponding to the delivery module starts leaking. What then? Nothing for it: the whole family has to move out. Because all the modules are packed into a single project, one module crashing stops the entire project, which can only be restarted after the delivery module is repaired. The drawback is obvious at a glance.

Microservices application:

Microservices were born to fix the shortcomings of the monolith. The core idea is splitting: the whole project is divided into small services along module boundaries, and the split services cooperate and communicate with one another to make up the original system. What does this buy us? When that same delivery module crashes at runtime, only that one service goes down; the other services keep running, users can still place orders, and the boss still makes money. The maintainers only need to repair the delivery service, start it up again, and add it back so deliveries resume. And most importantly, the whole family no longer has to sleep outside!
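To make the contrast concrete, here is a minimal Python sketch of that failure isolation. The service URL and helper functions are hypothetical, purely for illustration; the point is that the order service keeps accepting orders even while the separate delivery service is down.

```python
# Minimal sketch of failure isolation between two services.
# DELIVERY_SERVICE_URL is a hypothetical endpoint, for illustration only.
import requests

DELIVERY_SERVICE_URL = "http://delivery-service:8080/deliveries"

def save_order(order):
    # Stand-in for the order service's own database write.
    print("order saved:", order)

def mark_pending_delivery(order):
    # Stand-in for recording the order as "delivery pending, retry later".
    print("delivery deferred for:", order)

def place_order(order):
    save_order(order)
    try:
        # Ask the separate delivery service to schedule shipment.
        requests.post(DELIVERY_SERVICE_URL, json=order, timeout=2)
    except requests.RequestException:
        # The delivery service is down, but the order service keeps
        # working: the order is accepted and delivery is retried later.
        mark_pending_delivery(order)

place_order({"id": 1, "item": "keyboard"})
```

In the monolith, the same delivery failure would have taken the order path down with it; here it degrades to a deferred delivery.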

Later, we will talk about Alibaba's microservices suite.


2. How to handle high concurrency in a system

When I was in school, my graduation project was an "online job search and recruitment management system". Afterwards I tested it with my classmates and roommates; requests were answered quickly and nothing seemed wrong. But when an interviewer later asked me about the system's concurrency and performance, all I could talk about was the fine weather that day.

In real company projects, it is not enough for the system to merely run; it must also withstand concurrent requests from users. When the user base is huge, we need better machines, faster CPUs, more memory, and a better network environment to cope with highly concurrent requests.

To cope with this high concurrency, there are two approaches:

Vertical scaling (scale up)

That is, keep upgrading a single server: add more disks, install more memory, buy a better and faster CPU. A more powerful server can certainly handle more concurrency. In short, recharging makes you stronger.

But this has drawbacks. First, the boss and the customers are not necessarily willing to spend the money (you didn't hear that from me). More importantly, even if you upgrade every component of the server to the best available, a single machine still has a performance ceiling. Then what? As the saying goes, three cobblers together equal one Zhuge Liang. Hence horizontal scaling.

Horizontal scaling (scale out)

Since one server hits a bottleneck, we use several machines to handle user requests instead, turning the single application into a distributed cluster. This both saves money and gives the system real capacity for handling concurrency.

3. Cache architecture improves the system's read capability

Consider a simple scenario: a user requests data from the application, the backend queries the database, and the result is returned to the frontend for display. Under low traffic, the system handles this simple flow without trouble. But think about Taobao's Double 11 sale: millions of users visit Taobao at the same moment to buy goods, and millions of simultaneous requests hit the backend database, which can bring it down outright.

To prevent this, the cache appears. A layer of cache is added in the middle of the original flow. Instead of touching the database on every request, the application keeps the data it needs in the cache temporarily. When a request arrives, the cache is checked first; if the data is there, the database is not queried at all. This greatly reduces the chance of the database being crushed by user traffic. Moreover, the cache lives in memory, so requests are served straight from RAM instead of disk I/O, which is very fast.
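Here is a minimal Python sketch of this cache-aside pattern. A plain dict stands in for a real cache such as Redis, and query_database is a made-up stand-in for a real SQL query.

```python
# Cache-aside in miniature. A dict stands in for a real cache like Redis.
cache = {}

def query_database(product_id):
    # Stand-in for an expensive SQL query against the real database.
    print(f"expensive database query for {product_id}")
    return {"id": product_id, "name": "demo product"}

def get_product(product_id):
    # 1. Check the cache first.
    if product_id in cache:
        return cache[product_id]          # cache hit: no database access
    # 2. Cache miss: fall back to the database, then populate the cache.
    product = query_database(product_id)
    cache[product_id] = product
    return product

get_product(42)   # first call hits the database
get_product(42)   # second call is served from memory
```

The first call pays the database cost; every later call for the same id is a pure memory read.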

4. Asynchronous architecture improves the system's write capability

The cache architecture greatly improves a program's read performance. Is there a way to improve the system's write performance as well? This is where the asynchronous architecture comes in, in the form of the message queue.

The synchronous model

When delivering a package, the courier often asks the buyer to sign for it in person. If the buyer is not at home, the courier has to stand at the door and wait for them to come back, wasting time that could be spent delivering other packages.

In software terms: system A sends a message to system B, and can only proceed with other work after receiving B's reply; otherwise the flow is stuck right there. If B replies immediately, fine. But if the reply is delayed by the network or anything else, system A's CPU resources sit waiting the whole time; in other words, they are wasted.
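A minimal Python sketch of that blocking behavior, with system_b_handle standing in for a slow remote call to system B:

```python
# Minimal sketch of a synchronous (blocking) call.
import time

def system_b_handle(msg):
    time.sleep(3)                       # network delay, slow processing, etc.
    return f"reply to {msg}"

# System A can do nothing else until the reply comes back.
reply = system_b_handle("hello")        # blocks here for 3 seconds
print(reply)                            # only now can system A move on
```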

The asynchronous model

When delivering a package, the courier does not need the buyer to sign in person: the package goes straight into a parcel locker, and the buyer scans a code and picks it up whenever they get home.

In software terms: you send a message to a friend on WeChat, and once it is sent you are free to do anything else on your phone; you do not have to wait for the reply before playing other games.

Flow diagram:

You only need to hand your message to the newly added component in the middle, the message queue. The queue acknowledges your message immediately after receiving it; actually delivering the message to the other user is the queue's own job, and you no longer need to care.
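Here is a minimal in-process Python sketch of the idea. queue.Queue stands in for a real broker such as RabbitMQ or Kafka, but the shape of the interaction is the same: the producer returns immediately, and the slow consumer drains the queue at its own pace.

```python
# Producer / queue / consumer in miniature.
import queue
import threading
import time

message_queue = queue.Queue()

def consumer():
    while True:
        msg = message_queue.get()       # blocks until a message arrives
        time.sleep(1)                   # simulate slow downstream work
        print("processed:", msg)
        message_queue.task_done()

threading.Thread(target=consumer, daemon=True).start()

# The producer never waits for the slow consumer; enqueueing returns at once.
for i in range(3):
    message_queue.put(f"order-{i}")
    print("enqueued:", f"order-{i}")

message_queue.join()                    # demo only: wait so everything prints
```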

Benefits:

  • Fast response: the sender gets an immediate acknowledgment instead of waiting in vain and wasting system resources.

  • Traffic peak shaving: during Taobao's Double 11 sale, thousands of user requests arrive at once. If the system cannot process them all immediately, the requests can sit in the message queue and be processed in order.

  • Lower coupling: placing a message queue between the caller system and the callee system reduces the code-level dependency between the two; the interaction is handled by the queue in the middle.

5. Deciding which machine does the work: the emergence of load balancing

As mentioned earlier, multiple machines are used instead of one to process foreground users' requests. A new problem arises: a request comes in from the frontend, and the three backend servers stare at it wide-eyed. Who does the work? This is where load balancing appears.

When a request reaches the backend, it first passes through the load balancer. Picture the load balancer as a contractor: when a job comes in, it assigns the work to one of the workers under its management.

Load balancing involves many different algorithms, such as round robin, random, weighted, and least connections. Put simply, with round robin the contractor hands the incoming tasks to each worker in turn, evenly. The contractor can also deliberately give you more or less work depending on its mood; that corresponds to the load-balancing algorithm you choose. A later article will open a new chapter on this; a minimal round-robin sketch follows below.
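As a taste, here is a minimal Python sketch of round robin. The server addresses are made up, and a real deployment would put something like Nginx or LVS in this role.

```python
# Round-robin dispatch in miniature. Server addresses are hypothetical.
import itertools

servers = ["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"]
next_server = itertools.cycle(servers)  # endless round-robin iterator

def dispatch(request):
    server = next(next_server)          # pick servers in strict rotation
    print(f"{request} -> {server}")

for i in range(6):
    dispatch(f"req-{i}")                # each server gets every third request
```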

6. A cunning rabbit has three burrows: data storage

As mentioned earlier, a large number of user requests end up at the database. Have you ever considered what happens if the system has only one database and that database goes down, say, in a power failure? The whole system is dead. To solve this, the data must be backed up and served by multiple database servers: when one database server goes down, another can take over and continue serving users.

The professional term is master-slave replication: there is one master server and several slave servers, and the slaves periodically synchronize the master's data. When the master goes down, the system can keep running on a slave.

Most databases use this technique, MySQL and Redis among them; it will be split out into its own article later.
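Here is a minimal Python sketch of read/write splitting built on top of master-slave replication. The host names are hypothetical, and connect is a stand-in for a real database driver call; the point is that writes go to the master, which replicates them to the slaves, while reads are spread across the replicas.

```python
# Read/write splitting in miniature. Hosts are hypothetical;
# connect() stands in for a real driver call (e.g. a MySQL connector).
import random

MASTER = "db-master:3306"
REPLICAS = ["db-replica-1:3306", "db-replica-2:3306"]

def connect(host):
    print("connecting to", host)
    return host

def execute(sql):
    if sql.lstrip().lower().startswith(("insert", "update", "delete")):
        return connect(MASTER)              # all writes go to the master
    return connect(random.choice(REPLICAS)) # reads are spread across slaves

execute("INSERT INTO orders VALUES (1)")    # -> master
execute("SELECT * FROM orders")             # -> a replica
```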

7. Looking none too smart: search engines

I believe every programmer is familiar with the phrase "programming with Baidu and Google". Yet when a bug shows up, I sometimes open Baidu's search box and, after thinking for a long while, still do not know how to phrase the problem. To find a solution quickly, the keywords should be as close to the actual problem as possible; the closer they are, the more accurate the answers.

Large search sites like Baidu and Google have their own search engines. Based on the algorithms inside those engines, the answers to our questions are ranked and displayed according to the search keywords we provide.

Popular search engines include Elasticsearch (ES) and Solr; each will get its own article later.
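As a taste of how such engines work internally, here is a minimal Python sketch of an inverted index, the core data structure behind engines like Elasticsearch and Solr. The documents are toy examples.

```python
# Inverted index in miniature: each word maps to the documents containing it.
from collections import defaultdict

documents = {
    1: "java null pointer exception fix",
    2: "python import error fix",
    3: "java out of memory error",
}

index = defaultdict(set)
for doc_id, text in documents.items():
    for word in text.split():
        index[word].add(doc_id)

def search(query):
    # Return documents containing every query word (AND semantics).
    hits = set.intersection(*(index.get(w, set()) for w in query.split()))
    return [documents[d] for d in sorted(hits)]

print(search("java error"))   # -> ['java out of memory error']
```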

 
