Understanding the Distributed Architecture Knowledge System in One Article (with a Core Knowledge Map)


By Xiaotu, senior engineer at Alibaba

Related reading: The essential knowledge map of distributed system design in the cloud-native era (with 22 knowledge points)

Introduction: Starting from the microservice architecture (MSA), this article outlines the distributed knowledge system across basic theory, architecture design patterns, engineering practice, deployment and operations, and industry solutions, so as to build a three-dimensional view of the evolution from SOA to MSA, understand the essence of microservice distribution in both concepts and tooling, and get a hands-on feel for how a complete microservice architecture is built.

Follow the “Alibaba Cloud Native” official account and reply “distribution” to download a clear, full-size map of the distributed system knowledge system!

With the development of the mobile Internet and the spread of smart devices, computer systems long ago moved from single machines working independently to multi-machine cooperation: clusters built on distributed theory deliver huge, complex application services, and on that distributed foundation a cloud-native technology revolution is under way, completely breaking the traditional development model and unleashing a new generation of productivity.

Large map of distributed system knowledge system



Basic theory

The evolution from SOA to MSA

SOA: Service-Oriented Architecture

As a business grows, services need to be decoupled: a single large system is logically divided into subsystems that communicate through service interfaces. The service-oriented design ultimately relies on a bus to integrate services, and most of the time the subsystems also share a database. A single point of failure can therefore escalate into a bus-level failure, and may further drag down the database, which motivated a more independent design.


MSA: Microservice Architecture

Microservices are truly independent services, logically independent and isolated from the service entry point down to the data persistence layer. No service bus is needed, but this also makes the whole distributed system harder to build and manage: services must be orchestrated and governed. With the rise of microservices, the entire technology stack of the microservice ecosystem also needs to integrate seamlessly to support microservice governance.


Nodes and networks


Traditionally a node was a single physical machine hosting everything, services and databases alike. With the development of virtualization, a single physical machine could be divided into multiple virtual machines to maximize resource utilization, and the notion of a node became a single virtual machine. In recent years, as container technology matured, services have been fully containerized, so a node is now just a lightweight container service. In general, a node is a collection of logical computing resources that provides a unit of service.


The foundation of distributed architecture is the network. Whether on a LAN or the public Internet, computers cannot work together without a network, but the network also brings a series of problems: messages propagate in some order, and message loss and delay are routine events. We define three network models:

  • Synchronous network

    • Node synchronization execution
    • Limited message latency
    • Efficient global lock
  • Semi-synchronous network

    • Lock scope is extended
  • Asynchronous network

    • Node independent execution
    • Message delay unlimited
    • No global lock
    • Some algorithms are not feasible

There are two common transport-layer protocols:

  • TCP protocol

    • Above all, TCP is reliable, even though other protocols can transmit faster
    • TCP solves duplication and out-of-order delivery
  • UDP protocol

    • A constant stream of data
    • Packet loss is not fatal

Time and order


In the physical world, time flows on its own. For serial transactions it is easy to follow the pace of time: first come, first served. We invented the clock to mark points in the past, and the clock keeps the world in order. But in the distributed world, dealing with time is a real pain.

In the distributed world we need to coordinate the happens-before relationships between different nodes, yet each node keeps its own notion of time. The Network Time Protocol (NTP) was created to synchronize a standard time across nodes, but NTP's accuracy is not satisfactory, so logical clocks were devised and later refined into vector clocks:

  • NTP has shortcomings and cannot fully solve the coordination of concurrent tasks in a distributed environment

    • Time out of sync between nodes
    • Hardware clock drift
    • Thread may sleep
    • Operating system sleep
    • Hardware sleep


  • Logical clock

    • Defines which of two events came first
    • t’ = max(t, t_msg + 1)


  • Vector clock

    • t_i’ = max(t_i, t_msg_i)
  • atomic clock
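The clock rules above can be sketched in a few lines. The following is a minimal, illustrative vector-clock implementation (class and method names are my own, not from the article): a local event ticks the node's own slot, and receiving a message merges with an element-wise max, matching t_i' = max(t_i, t_msg_i).

```python
# Minimal vector clock sketch (illustrative names, not a library API).
class VectorClock:
    def __init__(self, node_id, num_nodes):
        self.node_id = node_id
        self.clock = [0] * num_nodes

    def tick(self):
        # Local event: advance this node's own component.
        self.clock[self.node_id] += 1

    def send(self):
        # Tick before sending so the message carries a fresh timestamp.
        self.tick()
        return list(self.clock)

    def receive(self, msg_clock):
        # Element-wise max merges the sender's knowledge, then tick.
        self.clock = [max(a, b) for a, b in zip(self.clock, msg_clock)]
        self.tick()

def happened_before(a, b):
    # a happened before b iff a <= b component-wise and a != b.
    return all(x <= y for x, y in zip(a, b)) and a != b
```

Two clocks that are incomparable in this partial order represent concurrent events, which is exactly the information a single scalar clock cannot express.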


With tools to measure time, the problem of ordering follows naturally. The whole body of distributed theory is about how different nodes negotiate consistency, and ordering is the basic concept underlying consistency theory, which is why we spent time introducing the scales and tools for measuring time.

Consistency theory

When it comes to consistency theory, we should first look at a chart comparing the impact of consistency choices on system construction:


The graph compares the balance of transaction, performance, error and delay under different consistency algorithms.

Strong consistency: ACID

In a stand-alone environment, free of network delay and message loss, we can place strict requirements on the traditional relational database, and ACID is the principle that guarantees transactions. These four properties are familiar enough to barely need explanation:

  • Atomicity: all operations in a transaction either complete or none do; the transaction never stops partway through;
  • Consistency: the integrity of the database is not violated before or after the transaction;
  • Isolation: the database allows multiple concurrent transactions to read, write and modify its data at the same time, while preventing the inconsistency that interleaved execution could otherwise cause;
  • Durability: once a transaction completes, its modifications to the data are permanent, even if the system fails.

Distributed consistency: CAP

In a distributed environment we cannot guarantee network connectivity or message delivery, so three important results were developed: CAP, FLP and DLS.

  • CAP: a distributed system cannot simultaneously guarantee consistency, availability and partition tolerance;
  • FLP: in an asynchronous model, where message delay has no upper bound, no deterministic algorithm can reach consensus in bounded time if even a single node may fail;
  • DLS:

    • protocols in a partially synchronous model (network delay is bounded, but we do not know by how much) can tolerate up to 1/3 arbitrary, i.e. Byzantine, faults;
    • deterministic protocols in an asynchronous model (no upper bound on network delay) cannot tolerate faults (although the paper does not rule out randomized algorithms tolerating up to 1/3 faults);
    • protocols in a synchronous model (network delay guaranteed below a known bound D) can, surprisingly, achieve 100% fault tolerance, although behavior is constrained when 1/2 or more of the nodes fail.

Weak consistency: BASE

In most scenarios we do not actually require strong consistency; some businesses can tolerate a degree of delayed consistency. To balance efficiency, the eventual-consistency theory BASE was developed: Basically Available, Soft state, Eventual consistency.

  • Basically available: in case of failure a distributed system is allowed to lose part of its availability, as long as core functionality remains available;
  • Soft state: the system is allowed to have intermediate states that do not affect its overall availability; in distributed storage a piece of data usually has at least three replicas, and the replication lag between nodes is an embodiment of soft state;
  • Eventual consistency: all replicas of the data become consistent after a bounded period of time. Eventual consistency is a special case of weak consistency, in contrast to strong consistency.

Consistency algorithm

The core of distributed architecture lies in realizing, and compromising on, consistency, so designing an algorithm that lets the communication and data of different nodes converge toward consistency is essential. Guaranteeing replica consistency across nodes over an unreliable network is very hard, and the industry has researched this topic extensively.

First, we need to understand the overarching principle of consistency, CALM:
The full name of the CALM principle is Consistency As Logical Monotonicity. It describes the relationship between monotonic logic and consistency in distributed systems; its content is as follows (see Consistency as Logical Monotonicity):

  • In a distributed system, monotonic logic guarantees eventual consistency without depending on scheduling by a central node;
  • In any distributed system, if all non-monotonic logic is scheduled through a central node, the system can achieve eventual consistency.

Then we focus on the data structure CRDT (Conflict-free Replicated Data Types):
Having understood some rules and principles of distribution, we should consider how to implement solutions. The premise of a consistency algorithm is its data structure; indeed, all algorithms rest on data structures, and a well-designed data structure paired with a refined algorithm solves real problems effectively. Through the continued exploration of our predecessors, the data structure now widely used in distributed systems is the CRDT.
See "A comprehensive study of Convergent and Commutative Replicated Data Types".

  • State-based: nodes merge their CRDT state directly; all nodes eventually converge to the same state, and the order of merging does not affect the final result;
  • Operation-based: every operation on the data is broadcast to the other nodes; as long as a node learns of all the operations (received in any order), it converges to the same state.
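As a concrete, toy example of a state-based CRDT, here is a grow-only counter (G-Counter) sketch; the names are illustrative. Each replica increments only its own slot, and merging takes the element-wise max, so merges are commutative, associative and idempotent: the order of merging never changes the result.

```python
# State-based CRDT sketch: a grow-only counter (G-Counter).
class GCounter:
    def __init__(self, replica_id, num_replicas):
        self.replica_id = replica_id
        self.counts = [0] * num_replicas

    def increment(self, n=1):
        # A replica only ever increments its own slot.
        self.counts[self.replica_id] += n

    def merge(self, other):
        # Merge = element-wise max; safe to apply in any order, any number of times.
        self.counts = [max(a, b) for a, b in zip(self.counts, other.counts)]

    def value(self):
        return sum(self.counts)
```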

Having understood the data structures, we turn to some important distributed protocols: HATs (Highly Available Transactions) and ZAB (ZooKeeper Atomic Broadcast).
See "Highly Available Transactions" and analyses of the ZAB protocol.

The last thing to study is the industry's mainstream consistency algorithms:
To be honest, I have not fully mastered the specifics of every algorithm. Consistency algorithms are the core content of distributed systems; progress in this area drives architectural innovation, and different application scenarios give birth to different algorithms.

  • Paxos: elegant Paxos algorithm
  • Raft: raft consistency algorithm
  • Gossip:《Gossip Visualization》
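To give a feel for how gossip spreads an update, here is a toy, seeded simulation of push-style gossip rounds (purely illustrative; real protocols add anti-entropy, failure detection and message loss):

```python
# Toy push-gossip simulation: one node starts with an update, and each
# informed node pushes it to `fanout` random peers per round.
import random

def gossip(num_nodes, fanout=3, seed=42, max_rounds=1000):
    rng = random.Random(seed)        # seeded for a deterministic run
    informed = {0}                   # node 0 starts with the update
    for rounds in range(1, max_rounds + 1):
        newly = set()
        for _ in range(len(informed)):
            newly.update(rng.sample(range(num_nodes), fanout))
        informed |= newly
        if len(informed) == num_nodes:
            return rounds            # rounds until everyone is informed
    return max_rounds
```

The number of rounds grows roughly logarithmically with cluster size, which is why gossip scales well for membership and state dissemination.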

In this section we covered the core theoretical foundation of distributed systems: how different nodes achieve data consistency. Next we survey the mainstream distributed systems that exist today.

Scene classification

File system

A single computer's storage always has an upper limit. With the emergence of networks, cooperative storage of files across multiple computers was proposed. The earliest distributed file systems were also called network file systems; the first file servers were developed in the 1970s, and in 1976 DEC designed the File Access Listener (FAL). The modern distributed file system traces to the famous Google paper: the Google File System laid the foundation for the field. For today's mainstream systems, see comparisons of distributed file systems; some common ones are:

  • HDFS
  • FastDFS
  • Ceph
  • mooseFS

Database

A database is, of course, also a file system, but managed data brings advanced features such as transactions, retrieval and deletion, which raise the complexity: data consistency must be considered while preserving adequate performance. Because traditional relational databases must balance transactional guarantees against performance, their distributed development has been limited. Non-relational databases shed the strong consistency constraint of transactions and settle for eventual consistency, enabling leapfrog development. NoSQL (Not Only SQL) has produced database types of several architectures, including KV stores, columnar storage and document stores.

  • Columnar storage: HBase
  • Document storage: Elasticsearch, MongoDB
  • KV store: Redis
  • Relational: Spanner


Computing

Distributed computing systems are built on top of distributed storage. They exploit the data redundancy and disaster recovery of the distributed system: multiple replicas let data be fetched efficiently, and parallel computing splits tasks that would otherwise need long sequential computation into many subtasks processed in parallel, improving efficiency. Distributed computing can be divided into offline, real-time and streaming computing.

  • Offline: Hadoop
  • Real-time: Spark
  • Streaming: Storm, Flink/Blink


Cache

Caching is a ubiquitous tool for improving performance, from CPU cache architecture all the way to distributed application storage. A distributed cache system provides random access to hot data and greatly reduces access time, but raises the problem of keeping the data consistent, for which distributed locks are introduced. The mainstream distributed cache system is essentially Redis.

  • Persistent: Redis
  • Non-persistent: Memcache
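The consistency concern above is often handled with the cache-aside (lazy loading) pattern. A minimal sketch, assuming an in-process dict stands in for both Redis/Memcache and the backing database (all names illustrative):

```python
# Cache-aside sketch: read through the cache, invalidate on writes.
import time

class CacheAside:
    def __init__(self, backing_store, ttl_seconds=60):
        self.store = backing_store      # the slow "database" (a dict here)
        self.ttl = ttl_seconds
        self.cache = {}                 # key -> (value, expires_at)

    def get(self, key):
        entry = self.cache.get(key)
        if entry and entry[1] > time.time():
            return entry[0]             # cache hit, still fresh
        value = self.store.get(key)     # cache miss: load from the store
        if value is not None:
            self.cache[key] = (value, time.time() + self.ttl)
        return value

    def invalidate(self, key):
        # After updating the store, drop the cached copy so the next
        # read reloads fresh data.
        self.cache.pop(key, None)
```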


Message queue

The distributed message queue system is a powerful tool for absorbing the complexity that asynchrony brings. In multi-threaded, high-concurrency scenarios we would otherwise have to design business code very carefully to avoid deadlocks caused by resource contention. A message queue stores asynchronous tasks in a queue for delayed consumption, digesting them one by one.

  • Kafka
  • RabbitMQ
  • RocketMQ
  • ActiveMQ
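The delayed-consumption idea can be sketched with the standard library's thread-safe queue standing in for Kafka/RabbitMQ (illustrative only): the producer enqueues tasks and moves on, while a consumer thread digests them one by one.

```python
# In-process producer/consumer sketch of queue-based async decoupling.
import queue
import threading

def run_demo(num_tasks=5):
    tasks = queue.Queue()
    results = []

    def consumer():
        while True:
            item = tasks.get()
            if item is None:            # sentinel: stop consuming
                break
            results.append(item * 2)    # "digest" the task

    worker = threading.Thread(target=consumer)
    worker.start()
    for i in range(num_tasks):          # producer enqueues and moves on
        tasks.put(i)
    tasks.put(None)                     # signal end of stream
    worker.join()
    return results
```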


Monitoring

As distributed systems developed from single machines into clusters, complexity increased greatly, so the whole system needs to be monitored.

  • Zookeeper


Application

The core module of a distributed system is how the application handles business logic. Applications call each other directly, communicating over specific protocols: some are RPC-based, others use the general-purpose HTTP.

  • HSF
  • Dubbo


Log

Errors are commonplace in distributed systems, and when designing a system we must treat faults as a normal phenomenon. When a failure occurs, recovering quickly and troubleshooting it are both very important. Distributed log collection, storage and retrieval give us a powerful tool for locating problems along the request chain.

  • Log collection: Flume
  • Log storage: Elasticsearch/Solr, SLS
  • Log tracing: Zipkin

Ledger

As mentioned earlier, distributed systems exist because single-machine performance is limited: hardware cannot be piled on endlessly, and a single machine eventually hits the bottleneck of the performance growth curve, so we use many computers to do the same work. But such a distributed system still needs centralized nodes to monitor or schedule system resources, even if the "center" is itself composed of multiple nodes. Blockchain is a truly decentralized distributed system: nodes communicate only over P2P network protocols, with no real central node, and coordinate the production of new blocks through mechanisms such as computing power (proof of work) and stake (proof of stake).

  • Bitcoin
  • Ethereum

Design pattern

The previous section listed the roles of different distributed system architectures in different scenarios. This section goes further: how to approach architecture design, the differences and emphases between design schemes, and how to choose cooperating design patterns in different scenarios, so as to reduce the cost of trial and error when designing distributed systems.


Availability

Availability is the proportion of time a system is up and working, usually measured as a percentage of uptime. It can be affected by system errors, infrastructure problems, malicious attacks and system load. Distributed systems usually promise users a service level agreement (SLA), so applications must be designed to maximize availability.

  • Health check: implement full-link functional checks, with external tools probing the system periodically through exposed endpoints
  • Queue-based load leveling: use a queue as a buffer between requests and services to smooth intermittent heavy load
  • Throttling: limit the resources consumed by an application instance, a tenant, or the entire service

Data management

Data management is a key element of distributed systems and affects most quality attributes. For reasons of performance, scalability or availability, data is often hosted across multiple servers in different locations, which poses a series of challenges: for example, data consistency must be maintained, and data typically needs to be synchronized across locations.

  • Caching: load data from the data storage tier into the cache on demand
  • CQRS (Command Query Responsibility Segregation): separate the command (write) path from the query (read) path
  • Event sourcing: record the complete series of events in the domain in append-only fashion
  • Index tables: create indexes over fields that are frequently queried and referenced
  • Materialized views: generate one or more pre-populated views of the data
  • Sharding: split data into horizontal partitions, or shards

Design and Implementation

Good design covers factors such as consistency and coherence in component design and deployment, maintainability that simplifies administration and development, and reusability that lets components and subsystems be used in other applications and solutions. Decisions made during the design and implementation phase have a huge impact on the distributed system, its quality of service, and its total cost of ownership.

  • Proxy: reverse proxies
  • Adapter: implement an adapter layer between a modern application and a legacy system
  • Front-end/back-end separation: back-end services provide interfaces for front-end applications to call
  • Compute resource consolidation: combine multiple related tasks or operations into a single unit
  • Configuration separation: move configuration information out of the application deployment package into a configuration center
  • Gateway aggregation: use a gateway to aggregate multiple individual requests into a single request
  • Gateway offloading: offload shared or common service functionality to a gateway proxy
  • Gateway routing: route requests to multiple services through a single endpoint
  • Leader election: coordinate a distributed system by electing one instance as the manager responsible for the other instances
  • Pipes and filters: break a complex task into a series of separate, reusable components
  • Sidecar: deploy an application's monitoring components in a separate process or container to provide isolation and encapsulation
  • Static content hosting: deploy static content to a CDN to speed up access


Messaging

Distributed systems need messaging middleware to connect components and services, ideally in a loosely coupled way that maximizes scalability. Asynchronous messaging is widely used and brings many benefits, but it also brings challenges such as message ordering and idempotency.

  • Competing consumers: multiple threads consume concurrently
  • Priority queue: partition the message queue into priority levels, consuming high-priority messages first

Management and monitoring

Distributed systems run in remote data centers where the infrastructure cannot be fully controlled, making management and monitoring harder than for stand-alone deployments. Applications must expose runtime information that administrators can use to manage and monitor the system, and must support changing business requirements and customization without being stopped or redeployed.

Performance and scalability

Performance is the responsiveness of the system: its ability to perform any operation within a given time interval. Scalability is its ability to handle load growth without hurting performance, by readily adding resources. Distributed systems often face variable load and activity peaks, especially in multi-tenant scenarios, which are almost impossible to predict; instead, applications should be able to scale out within limits to meet peak demand and scale in when demand falls. Scalability concerns not only compute instances but also other elements such as data storage and message queues.


Resilience

Resilience is the ability of a system to handle failures gracefully and recover from them. Distributed systems are usually multi-tenant, use shared platform services, compete for resources and bandwidth, communicate over the Internet, and run on commodity hardware, all of which raises the likelihood of both transient and more permanent failures. Staying resilient requires detecting faults and recovering quickly and effectively.

  • Bulkhead: isolate the elements of an application into pools so that if one fails, the others continue to function
  • Circuit breaker: handle faults that may take a variable amount of time to fix when connecting to a remote service or resource
  • Compensating transaction: undo the work performed by a series of steps that together define an eventually consistent operation
  • Health check: implement full-link functional checks, with external tools probing the system periodically through exposed endpoints
  • Retry: transparently retry previously failed operations, so the application can handle expected transient failures when connecting to a service or network resource
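The retry pattern above can be sketched as a retry loop with exponential backoff (function and parameter names are my own, not from any specific library):

```python
# Retry with exponential backoff for transient failures.
import time

def retry(operation, max_attempts=3, base_delay=0.01):
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts:
                raise                   # give up: the fault is not transient
            # Back off exponentially so a struggling service can recover.
            time.sleep(base_delay * (2 ** (attempt - 1)))
```

Retrying only the exception types known to be transient (here `ConnectionError`) matters: blindly retrying a non-idempotent or permanently failing operation makes things worse.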


Security

Security is the ability of a system to prevent malicious or accidental behavior outside its designed use, and to prevent the disclosure or loss of information. Distributed systems run on the Internet outside the trusted local boundary, are usually open to the public, and may serve untrusted users. Applications must be protected from malicious attacks, access must be restricted to approved users, and sensitive data must be protected.

  • Federation: delegate authentication to an external identity provider
  • Gatekeeper: protects applications and services by using a dedicated host instance that acts as a proxy between clients and applications or services, validates and cleans requests, and passes requests and data between them
  • Valet key: use a token or key that provides clients with limited direct access to a specific resource or service

Engineering practice

So far we have introduced the core theory of distributed systems, the problems they face and the compromises that solve them, listed the categories of mainstream distributed systems, and summarized some methodology for building them. Next, from an engineering point of view, we walk through the content and steps of building a distributed system hands-on.

Resource scheduling

One can hardly make bricks without straw: all our software systems are built on hardware servers. From deploying software directly on physical machines, to virtual machines, and finally to containerized cloud resources, the use of hardware has become intensively managed. This section contrasts the responsibilities of the traditional operations role; in a DevOps environment, development and operations merge so that resources can be used elastically and efficiently.


Machine management

In the past, when a software system needed more machines as users grew, the traditional way was to ask operations for machines and then deploy the software services into the cluster. The whole process depended on the operators' personal experience, which was inefficient and error-prone. With containerization, we only need to request cloud resources and run a container script:

  • Application scaling: scale services out when users surge, including automatic scale-out and automatic scale-in after the peak
  • Machine offlining: take obsolete applications offline and let the cloud platform reclaim the container and host resources
  • Machine replacement: replace the container and host resources of failed machines and start the service automatically, switching over seamlessly

Network management

Beyond computing resources, the other most important resource is the network. In today's cloud environments we hardly ever touch physical bandwidth directly; bandwidth is managed by the cloud platform, and what we need is to request network resources on demand and manage them effectively:

  • Domain name application: request domain name resources and specify multi-domain mapping rules
  • Domain name change: manage domain name changes on a unified platform
  • Load management: configure access policies for multi-machine applications
  • Security perimeter: basic access authentication to block illegal requests
  • Unified access: a unified platform for requesting access permissions and unified login management

Fault snapshot

When the system fails, the first priority is recovery, but preserving the scene of the crime is just as important: the resource scheduling platform needs a unified mechanism to preserve the failure scene.

  • Scene preservation: capture resource state such as memory distribution and thread counts, for example via Java dump hooks
  • Debug access: use bytecode instrumentation to debug the production environment with live logs, without intruding on business code

Traffic scheduling

Once a distributed system is built, the gateway is the first checkpoint to be tested, and next we need to pay attention to system traffic, that is, how to manage it. The goal is to reserve resources, within the system's traffic ceiling, for the highest-quality traffic, while blocking illegal and malicious traffic outside, saving cost and keeping the system from crashing.

Load balancing

Load balancing is the general design for how services digest traffic, usually divided into physical-layer hard load balancing and software-layer soft load balancing. Load balancing solutions are mature in the industry, and we usually optimize for the specific business in different environments. Common solutions include:

  • Switch
  • F5
  • Nginx/Tengine
  • VIPServer/ConfigServer
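As a taste of soft load balancing, here is a simplified weighted round-robin sketch, the family of policy behind Nginx's upstream `weight` option (real implementations such as Nginx's smooth weighted round-robin interleave picks more evenly; this version is purely illustrative):

```python
# Simplified weighted round-robin: a server with weight w is picked
# w times per cycle.
class WeightedRoundRobin:
    def __init__(self, servers):
        # servers: list of (name, weight) pairs
        self.pool = [name for name, w in servers for _ in range(w)]
        self.index = 0

    def next_server(self):
        server = self.pool[self.index]
        self.index = (self.index + 1) % len(self.pool)
        return server
```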

Gateway Design

The gateway bears the brunt of load balancing, because it is where the cluster's concentrated traffic first lands; if the gateway cannot take the pressure, the whole system becomes unavailable.

  • High performance: the first consideration in gateway design is high-performance traffic forwarding; a single gateway node can usually handle traffic on the order of millions of concurrent requests
  • Distributed: to share traffic pressure and support disaster recovery, the gateway itself must also be distributed
  • Business filtering: the gateway applies simple rules to weed out most malicious traffic

Traffic management

  • Request validation: request authentication can intercept and clean up a large share of illegal requests
  • Data caching: most stateless requests have data hotspots, so a CDN can absorb a large share of the traffic

Flow control

For the remaining, legitimate traffic, we use different algorithms to divide the requests.

  • Traffic distribution

    • Counter
    • Queue
    • Leaky bucket
    • Token bucket
    • Dynamic flow control
  • Rate limiting: during a traffic surge we usually need limiting measures to prevent a system avalanche. We estimate the system's traffic ceiling and set an upper bound; once traffic exceeds the threshold, the excess is kept out of the system, preserving availability by sacrificing part of the traffic.

    • Rate-limiting strategy
    • QPS granularity
    • Thread-count granularity
    • RT threshold
    • Rate-limiting tool: Sentinel
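Of the flow-control algorithms listed above, the token bucket is easy to sketch: tokens refill at a fixed rate up to a burst ceiling, and a request is admitted only if a token is available (parameter names are illustrative, not from Sentinel):

```python
# Token-bucket rate limiter sketch.
import time

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate                # tokens added per second
        self.capacity = capacity        # burst ceiling
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True                 # admit the request
        return False                    # over the limit: shed this request
```

Unlike a plain counter, the bucket tolerates short bursts up to `capacity` while still enforcing the long-term `rate`.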

Service scheduling

As the saying goes, to forge iron, one must be strong oneself. Once traffic is well scheduled and managed, what remains is the robustness of the service itself. Failures are common in distributed services, and we even need to treat failure itself as part of the distributed service.

Registry Center

In the network management section we introduced the gateway, the hub of traffic; the registry is the foundation on which services stand.

  • Status: application service status comes first; through the registry you can check whether a service is available
  • Life cycle: the different states of an application service make up its life cycle
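The status and health-check idea can be sketched as a toy registry that marks an instance unavailable once its heartbeat expires (this illustrates the concept only; it is not ZooKeeper's actual API):

```python
# Toy service registry with heartbeat-based availability.
import time

class Registry:
    def __init__(self, ttl_seconds=30):
        self.ttl = ttl_seconds
        self.services = {}              # name -> last heartbeat timestamp

    def register(self, name):
        self.services[name] = time.time()

    def heartbeat(self, name):
        # Instances refresh their entry periodically to stay "alive".
        self.services[name] = time.time()

    def is_available(self, name):
        last = self.services.get(name)
        return last is not None and (time.time() - last) < self.ttl
```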

Version management

  • Cluster version: not only does a cluster need its own version number; a cluster composed of different services also needs an overall version number
  • Version rollback: when a deployment goes wrong, roll back according to the cluster's overall version number

Service Orchestration

Service choreography means controlling how resources interact through a sequence of messages; the resources involved are peers, with no centralized control. In a microservice environment, however, we need a general coordinator to handle the dependencies and call relationships between services, and K8s is our best choice.

  • K8s
  • Spring Cloud

    • HSF
    • ZK+Dubbo

service control

The previous sections addressed network robustness and efficiency. This section describes how to make the services themselves more robust.

  • Service discovery: in the resource management section we introduced that, after container and host resources are requested from the cloud platform, the application service can be started by an automated script. Once started, the service registers its information with the registry and is exposed through the service gateway (gateway access). The registry monitors the service's states, performs health checks, and marks unavailable services.

    • Gateway access
    • Health check
  • Degradation: when user traffic surges, we first act at the traffic end, i.e. rate limiting. If, after limiting, the system's responses still slow down and threaten to cause further problems, we also need to act on the service itself. Service degradation means turning off functions that are not core right now, or relaxing accuracy requirements that are not critical, and doing manual remediation afterwards.

    • Reduce consistency constraints
    • Shut down non core services
    • Simplified function
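A common way to implement "shut off non-core services" is a feature switch consulted before the expensive path runs. The sketch below is hypothetical (the `FEATURE_FLAGS` dict stands in for a real switch middleware, and `recommend` is an invented example function):

```python
FEATURE_FLAGS = {"recommendations": True}  # stand-in for a switch middleware

def degradable(feature: str, fallback):
    """Run the real function only while its feature flag is on;
    otherwise return a cheap fallback value (service degradation)."""
    def wrap(fn):
        def inner(*args, **kwargs):
            if FEATURE_FLAGS.get(feature, False):
                return fn(*args, **kwargs)
            return fallback
        return inner
    return wrap

@degradable("recommendations", fallback=[])
def recommend(user_id):
    # Imagine an expensive personalized-recommendation call here.
    return ["item-%d" % user_id]
```

Flipping `FEATURE_FLAGS["recommendations"]` to `False` during an incident makes every call return the empty fallback instantly, shedding load from the non-core path.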
  • Circuit breaking: even after all the above measures we may still be uneasy, so we protect the system one step further. A circuit breaker is self-protection against overload, like an electrical switch tripping. For example, when our service keeps querying the database and a business bug makes those queries pathological, the database access itself should be broken so the database is not dragged down by the application, and a friendly message should tell the caller to stop calling blindly.

    • Closed state
    • Half-open state
    • Open state
    • Circuit-breaking tool: Hystrix
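To illustrate the three breaker states, here is a toy Python circuit breaker (the class, thresholds, and state names are illustrative, not the Hystrix API): it opens after consecutive failures, fails fast while open, and lets one probe through after a timeout.

```python
import time

class CircuitBreaker:
    """Toy circuit breaker: CLOSED -> OPEN after `threshold` consecutive
    failures; OPEN -> HALF_OPEN after `reset_timeout` seconds; HALF_OPEN
    -> CLOSED on the next success (or back to OPEN on failure)."""

    def __init__(self, threshold=3, reset_timeout=30.0):
        self.threshold = threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.state = "CLOSED"
        self.opened_at = 0.0

    def call(self, fn, now=None):
        now = now if now is not None else time.monotonic()
        if self.state == "OPEN":
            if now - self.opened_at >= self.reset_timeout:
                self.state = "HALF_OPEN"   # allow one probe request through
            else:
                raise RuntimeError("circuit open: failing fast")
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.state == "HALF_OPEN" or self.failures >= self.threshold:
                self.state = "OPEN"
                self.opened_at = now
            raise
        self.failures = 0
        self.state = "CLOSED"
        return result
```

Failing fast while open is the point: callers get an immediate, friendly error instead of piling more load onto a struggling dependency.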
  • Idempotence: an idempotent operation has the same effect whether it is executed once or many times. We therefore attach a global ID to each operation, so that repeated requests from the same client can be recognized and dirty data avoided.

    • Global consistency ID
    • Snowflake
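A Snowflake-style generator can be sketched as follows, assuming the common 41-bit timestamp / 10-bit worker / 12-bit sequence layout (the custom epoch below is an arbitrary choice, not a standard):

```python
import threading
import time

class Snowflake:
    """Snowflake-style 64-bit ID: 41-bit millisecond timestamp, 10-bit
    worker id, 12-bit per-millisecond sequence. IDs are unique per worker
    and roughly time-ordered."""

    EPOCH = 1577836800000  # custom epoch (2020-01-01 UTC), arbitrary

    def __init__(self, worker_id: int):
        assert 0 <= worker_id < 1024, "worker id must fit in 10 bits"
        self.worker_id = worker_id
        self.sequence = 0
        self.last_ms = -1
        self.lock = threading.Lock()

    def next_id(self) -> int:
        with self.lock:
            ms = int(time.time() * 1000)
            if ms == self.last_ms:
                self.sequence = (self.sequence + 1) & 0xFFF
                if self.sequence == 0:          # sequence exhausted this ms
                    while ms <= self.last_ms:   # spin to the next millisecond
                        ms = int(time.time() * 1000)
            else:
                self.sequence = 0
            self.last_ms = ms
            return ((ms - self.EPOCH) << 22) | (self.worker_id << 12) | self.sequence
```

Because the worker id is baked into every ID, machines can generate IDs independently without coordinating, which is what makes the scheme attractive for idempotency keys in a distributed system.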

Data scheduling

The biggest challenge of data storage is managing redundancy. Too much redundancy wastes resources and lowers efficiency; too few replicas provide no disaster recovery. Our common practice is to separate state out of requests, turning stateful requests into stateless ones.

State separation

For example, we usually cache login information in a global Redis middleware instead of duplicating users’ login data across multiple applications.
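The pattern can be sketched in a few lines; here a plain dict stands in for the shared Redis store, and `login`/`handle_request` are invented names:

```python
import uuid

SESSION_STORE = {}  # stand-in for a shared cache such as Redis

def login(user: str) -> str:
    """Store login state in the shared store, not inside the app instance."""
    token = uuid.uuid4().hex
    SESSION_STORE[token] = {"user": user}
    return token

def handle_request(token: str) -> str:
    # Any application instance can serve the request: the state lives in
    # the shared store, so the instances themselves stay stateless.
    session = SESSION_STORE.get(token)
    return session["user"] if session else "anonymous"
```

Because no instance holds the session privately, instances can be added, removed, or restarted freely, which is exactly what stateless scaling requires.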

Sub-database and sub-table

Horizontal scaling of data.

Sharding and partitioning

Multi-replica redundancy.
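A common routing scheme for sub-database/sub-table setups is hash-based: derive the target database and table from the key. The 4-database, 8-table layout and names below are purely hypothetical:

```python
import hashlib

def shard_for(user_id: str, databases: int = 4, tables_per_db: int = 8):
    """Route a key to a (database, table) pair by hashing, spreading data
    evenly across db_0..db_3 and user_t_0..user_t_7 (hypothetical layout)."""
    h = int(hashlib.md5(user_id.encode()).hexdigest(), 16)
    db = h % databases
    table = (h // databases) % tables_per_db
    return f"db_{db}", f"user_t_{table}"
```

The routing must be deterministic (the same key always lands on the same shard); changing the shard count later requires a resharding migration, which is why layouts are usually sized generously up front.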

Automatic operation and maintenance

We introduced the DevOps trend back in the resource application and management section. Achieving the integration of development and operations requires different middleware working together.

Configuration center

A global configuration center, divided by environment and managed uniformly, reduces the confusion of scattered configurations.

  • Switch
  • Diamond

Deployment strategy

Distributed deployment of microservices is now standard practice. How do we make our services better support business growth? Robust deployment strategies come first. The following strategies suit different businesses and different stages.

  • Downtime deployment
  • Rolling deployment
  • Blue-green deployment
  • Grayscale (canary) deployment
  • A/B testing

job scheduling

Task scheduling is an essential part of any system. The traditional approach is to configure cron jobs on a Linux machine, or to implement scheduling directly in business code; today this is replaced by mature middleware.

  • SchedulerX
  • Spring scheduled tasks

Application management

A large part of operations work consists of restarting applications, taking them online or offline, and cleaning logs.

  • Application restart
  • Application offline
  • Log cleaning

Fault tolerance

Since failures are a common occurrence in distributed systems, handling them is an indispensable part of the design. We usually have active and passive ways to deal with them:

  • The active way is to retry when an error occurs; the retry may succeed, and if it does, the error is avoided.
  • The passive way applies when the error has already happened: to recover, we do remedial work to minimize the negative impact.

Retry design

The key to retry design is choosing the retry interval and the number of attempts: beyond a certain number of retries, or after a certain period, retrying becomes pointless. The open-source project Spring Retry implements such retry policies well.
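As a minimal analogue of such a policy (not Spring Retry's API), here is a bounded retry with exponential backoff; the `sleep` parameter is injectable so tests need not actually wait:

```python
import time

def retry(fn, attempts=3, base_delay=0.1, sleep=time.sleep):
    """Call fn, retrying with exponential backoff; give up after
    `attempts` tries and re-raise the last error, since retrying
    forever is pointless."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            sleep(base_delay * (2 ** i))  # 0.1s, 0.2s, 0.4s, ...
```

In production one usually also adds jitter to the delays and restricts retries to errors known to be transient, so that a hard failure is not hammered repeatedly.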

Transaction compensation

Transaction compensation follows our eventual-consistency philosophy. A compensating transaction does not necessarily return the system's data to the exact state it was in at the start of the original operation; instead, it compensates for the work performed by the steps that completed successfully before the operation failed. Nor is the order of the compensating steps necessarily the exact reverse of the original steps: one data store may be more sensitive to inconsistency than another, so the steps that undo changes to that store should run first. Using short-lived, timeout-based locks on each resource an operation needs, and acquiring those resources in advance, increases the probability that the overall activity succeeds; work should begin only after all resources have been obtained, and all operations must complete before the locks expire.
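A saga-style sketch of this idea pairs each step with its compensation; on failure, the compensations of the completed steps are executed (simple reverse order here, though as noted the order need not be the exact reverse):

```python
def run_saga(steps):
    """Execute (action, compensation) pairs in order; if a step fails,
    run the compensations of the already-completed steps and re-raise."""
    done = []
    for action, compensate in steps:
        try:
            action()
            done.append(compensate)
        except Exception:
            for undo in reversed(done):
                undo()  # best-effort compensation, not a strict rollback
            raise
```

Compensations must themselves be idempotent and safe to run after a partial failure; a "refund" step, for example, should be harmless if the charge it undoes never fully settled.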

Full stack monitoring

Because a distributed system is many machines cooperating, and the network can never be fully reliable, we need a monitoring system that covers every link, from the bottom layers up through the business, so that when an accident happens we can repair the fault in time and prevent further problems.

Foundation layer

The foundation layer monitors container resources, i.e. the load on each hardware metric:

  • CPU, IO, memory, threads, throughput


Middleware layer

A distributed system connects to a large number of middleware platforms, and the health of the middleware itself also needs to be monitored.

application layer

  • Performance monitoring: at the application level, monitor each service's real-time metrics (QPS, RT) and its upstream and downstream dependencies
  • Business monitoring: besides monitoring the application itself, business monitoring also helps keep the system healthy; with well-designed business rules, alarms are raised for abnormal situations

Monitoring link

  • Zipkin / EagleEye
  • SLS
  • GOC
  • Alimonitor

Fault recovery

When a fault has occurred, the first thing to do is remove it immediately to restore normal service availability; this usually means a rollback operation.

Application rollback

Before rolling the application back, preserve the fault scene for later troubleshooting.

Baseline regression

After the application service is rolled back, the code baseline also needs to be reverted to the previous version.

Version rollback

A full rollback requires service orchestration, which rolls back the whole cluster by its large version number.

performance tuning

Performance optimization is a major topic in distributed systems and covers a wide range of areas; it deserves a series of its own, so it is not expanded here. The service-governance process described above is itself a process of performance optimization.
Refer to the high-concurrency programming knowledge system.

Distributed lock

Caching is a powerful tool for performance: ideally, every request gets its result as quickly as possible without extra computation. From the CPU's three-level cache to distributed caches, caching is everywhere. What a distributed cache must solve is data consistency, and this is where distributed locks come in: how we handle distributed locking determines how efficiently we can fetch cached data.
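A common distributed-lock pattern (the one behind Redis's `SET key value NX PX ttl`) combines atomic acquire-if-absent with a TTL so a crashed holder cannot block everyone forever. Below is an in-memory stand-in for that store, purely for illustration:

```python
import time

class FakeLockStore:
    """In-memory stand-in for the Redis `SET key value NX PX ttl` pattern:
    acquire succeeds only if the key is free or its TTL has expired, and
    only the owner that acquired a lock may release it."""

    def __init__(self):
        self.data = {}  # key -> (owner, expires_at)

    def acquire(self, key, owner, ttl, now=None):
        now = now if now is not None else time.monotonic()
        current = self.data.get(key)
        if current is None or current[1] <= now:   # free, or lease expired
            self.data[key] = (owner, now + ttl)
            return True
        return False

    def release(self, key, owner):
        # Checking the owner prevents deleting a lock that expired and
        # was re-acquired by someone else.
        if self.data.get(key, (None, 0))[0] == owner:
            del self.data[key]
            return True
        return False
```

With a real Redis, the acquire is a single atomic command and the owner-checked release is done in a Lua script; multi-node variants add further safeguards, but the TTL-plus-owner-token core is the same.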

High concurrence

Multithreaded programming improves system throughput, but it also adds complexity to the business logic.


Event-driven asynchronous programming is a newer model that avoids the complexity of multithreaded business handling and improves the system's responsiveness.


Finally, if possible, prefer a single-node approach over a distributed system. Distributed systems inevitably come with failed operations: to cope with catastrophic failures we use backups, and to improve reliability we introduce redundancy.

The essence of a distributed system is a group of machines cooperating; our job is to find the means to make those machines run as expected. Such a complex system requires understanding how each link and middleware is wired together, which is a very large undertaking. Fortunately, in the microservice context most of the groundwork has been done: the distributed architecture described above can essentially be built with Docker + K8s + Spring Cloud in a real project.

The core technologies of distributed architecture are summarized in the knowledge map at the top of this article.


The middleware used in the distributed technology stack is likewise collected in that knowledge map.

