How to decouple under a microservice architecture, and how to refactor tight coupling?



Today we are going to talk about how to decouple microservices under a microservice architecture, and how to refactor tightly coupled microservices. It is worth understanding that many of the problems that surface later in a microservice effort are caused by an unreasonable division of microservice modules at the very beginning. For the specific methods and principles of module division, I summarize the following points.

  • Principle 1: split into fewer than 10 microservice modules
  • Principle 2: do not split apart modules with strong data associations
  • Principle 3: let data aggregation drive the aggregation of business functions
  • Principle 4: move from vertical functional division to horizontal layering
  • Principle 5: observe the basic principle of high cohesion and loose coupling

I will not repeat the specifics in this article. As you can see, splitting microservice modules is mostly a matter of business modeling and system analysis; today's discussion of microservice decoupling focuses instead on the decoupling methods and strategies available from a technical standpoint.

A summary of the problems


In recent years, microservice architecture, enterprise platform construction, componentization and servitization, the "platform + application" construction model, as well as Docker containerization and DevOps, have attracted more and more attention from traditional enterprises, and many traditional enterprise architectures are evolving and transforming in this direction. As for the pros and cons of microservice architecture itself, and the evolution route for traditional enterprises to adopt it, I have covered these in many earlier articles. Today I will mainly talk about the decoupling problem under a microservice architecture.

It should be understood that once a microservice architecture is adopted, what used to be in-process interface calls and complete local transactions become cross-domain service calls between microservice modules, and traditional transactions become distributed transactions, all of which increases the complexity of the system.

What a single system could do on its own in the past now depends on underlying technical component services and on other business microservice modules, reachable only by calling multiple HTTP REST API interfaces. A problem with any one of those API interfaces directly affects the use of front-end business functions.

Avalanche effect among microservices: after a microservice architecture is adopted, there are a large number of API service calls among the microservices, forming service call chains such as A → B → C. If service C fails, it directly blocks service B's normal access; B's failure then propagates further to the consumers of service A.

There are several key reasons why Internet enterprises succeed with microservice architecture.

  • One is that the microservice architecture can better scale the platform's performance and meet high scalability requirements.
  • The second is that the business rules of Internet applications are relatively simple, so it is easy to decouple the modules.
  • The third is that large Internet enterprises have stronger accumulated IT expertise, so they are better able to build a highly available technical platform and to implement automated operations and monitoring once the microservice architecture is in place.

Some enterprises, for example, found no problems in the initial implementation of the microservice architecture, but later discovered that system operations and maintenance, performance monitoring, and fault analysis and troubleshooting could not keep up, so they could not respond to customer needs in time or quickly locate and resolve problems.

In other words, when the IT governance and technical capability of a traditional enterprise's team cannot keep up, it directly determines the success or failure of the microservice architecture implementation.

Let's get back to the point. Today I want to discuss how to minimize the extent to which coupling between microservice modules causes functional or operational failures in a single module. In short: when a business function runs inside one microservice module, how do we minimize its dependence on the availability of the HTTP API interfaces of external microservice modules? Even if an external module goes down, the current module should remain usable, or at least its core functions should not be affected.
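One common technical mitigation for this kind of failure propagation (not prescribed by the article, but consistent with its goal of keeping a module usable when a dependency is down) is a circuit breaker with a local fallback. The sketch below is a minimal illustration; `SimpleCircuitBreaker` and every name in it are hypothetical:

```java
import java.util.function.Supplier;

// Minimal circuit-breaker sketch (illustrative names): after repeated remote
// failures the breaker opens, and a local fallback keeps the module usable.
class SimpleCircuitBreaker {
    private final int failureThreshold;
    private int consecutiveFailures = 0;

    SimpleCircuitBreaker(int failureThreshold) {
        this.failureThreshold = failureThreshold;
    }

    // Call the remote supplier unless the breaker is open; on failure or
    // open breaker, return the fallback so the caller never hangs on B/C.
    public <T> T call(Supplier<T> remote, T fallback) {
        if (consecutiveFailures >= failureThreshold) {
            return fallback;               // breaker open: skip the remote call
        }
        try {
            T result = remote.get();
            consecutiveFailures = 0;       // success closes the breaker
            return result;
        } catch (RuntimeException e) {
            consecutiveFailures++;         // count the failure, degrade gracefully
            return fallback;
        }
    }

    public static void main(String[] args) {
        SimpleCircuitBreaker breaker = new SimpleCircuitBreaker(2);
        // Simulate downstream service C being down: every call throws.
        for (int i = 0; i < 4; i++) {
            String credit = breaker.call(() -> {
                throw new RuntimeException("service C unavailable");
            }, "cached-default-rating");
            System.out.println(credit);    // always the fallback, never an error
        }
    }
}
```

A real setup would add a timed half-open state; the point here is only that the consumer degrades to a local answer instead of propagating the failure up the chain.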

We can discuss this problem from several aspects separately.

Turning synchronous calls into asynchronous calls

When it comes to decoupling, the first thing that comes to mind is using message middleware to make calls asynchronous: turn synchronous calls into asynchronous ones and achieve decoupling through asynchrony. We send the message to the message middleware first; as long as the middleware itself is highly available and does not go down, the integration is considered complete, and the middleware then distributes the message to the target system asynchronously, with retry support.
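As a minimal illustration of this idea, the sketch below stands in for real message middleware with an in-memory queue: the sender enqueues and returns immediately, and a dispatch step delivers to the target system with retries. `AsyncBridge` and the failure simulation are purely illustrative, not a real broker:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// In-memory stand-in for message middleware (illustrative names): the sender
// hands the message to a queue and returns; a dispatcher delivers it to the
// target system and re-enqueues failures for retry.
class AsyncBridge {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    public final List<String> delivered = new ArrayList<>();
    private int failuresToSimulate;

    AsyncBridge(int failuresToSimulate) {
        this.failuresToSimulate = failuresToSimulate;
    }

    // "Send" is just an enqueue: the caller is decoupled from the target.
    public void send(String message) { queue.add(message); }

    // Target system stub: fails the first N deliveries, then succeeds.
    // (A real broker would retry with backoff; failures are finite here
    // only so the demo terminates.)
    private boolean deliverToTarget(String message) {
        if (failuresToSimulate > 0) { failuresToSimulate--; return false; }
        delivered.add(message);
        return true;
    }

    // Drain the queue, re-enqueueing failed messages (retry support).
    public void dispatchAll() {
        String msg;
        while ((msg = queue.poll()) != null) {
            if (!deliverToTarget(msg)) {
                queue.add(msg);            // retry later; the sender never fails
            }
        }
    }

    public static void main(String[] args) {
        AsyncBridge bridge = new AsyncBridge(2);
        bridge.send("order-1001");         // returns immediately
        bridge.dispatchAll();              // "middleware" retries until success
        System.out.println(bridge.delivered);   // [order-1001]
    }
}
```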

Adopting message-oriented middleware


Message-oriented middleware is middleware that supports and guarantees synchronous and asynchronous message sending and receiving between distributed applications. A message is the basic unit of data exchange between distributed applications, and the middleware provides the communication interface between them. In the asynchronous mode, the sender does not need to know the receiver's state when sending a message, let alone wait for the receiver's reply; likewise, the receiver does not need to know the sender's state when receiving a message, and does not need to process it synchronously. The two sides are completely loosely coupled and the communication is non-blocking. This asynchronous communication mode is guaranteed by the message middleware's message queue and its service mechanism.

Message middleware decouples publishers and subscribers in time, in space, and in process:

  • Time decoupling: the publisher and subscriber need not be online at the same time to exchange messages; the middleware provides asynchronous delivery through store-and-forward.
  • Space decoupling: publishers and subscribers need not know each other's physical address or port, or even each other's logical name and number.
  • Process decoupling: neither publishers nor subscribers block their control flow when sending or receiving data.

In terms of basic functionality, whether point-to-point message middleware or a message broker, the architecture is clear and simple; it is the diversity and complexity of distributed applications and their environments that make message-oriented middleware complex.

Current message middleware still falls into two broad camps: one based on the AMQP advanced message queuing protocol, the other on the JMS messaging API. RabbitMQ, widely used on the Internet, is based on AMQP (Kafka, though often grouped with it, actually uses its own protocol), while WebLogic JMS and IBM MQ are message middleware products based on JMS.

WebLogic itself is an enterprise-grade application server, and WebLogic JMS is an enterprise-grade message middleware product with high reliability, availability, scalability, and performance. It supports the mainstream messaging models and core features such as publish-subscribe, message persistence, transaction processing, and clustering.

The usage scenarios of message middleware include the following aspects:

  • Message notification: event notification after a document's status changes, or after data transmission completes
  • Asynchronous integration: the service consumer only needs to deliver the data to the OSB and can return immediately; complete decoupling is achieved through asynchronous integration
  • Peak shaving for the target system: scenarios where data arrives with high concurrency but the target system's processing capacity is limited
  • Message publish-subscribe: basic master data can be distributed one-to-many in real time through JMS
  • High-reliability scenarios: guaranteeing that no data is lost during integration

To implement message integration with WebLogic JMS, the specific process is as follows:

(Figure: WebLogic JMS message integration flow)

Business analysis based on event-driven thinking

To move from synchronous to asynchronous, we must change our thinking from the very start of business requirements analysis, that is, shift from the traditional business-process analysis method to an event-driven analysis method. I discussed this in detail long ago when organizing material on EDA (event-driven architecture); the following excerpt is provided for reference.

In EDA, events are transferred between loosely coupled components and services. An event-driven system typically consists of event producers and event consumers. Consumers subscribe to events with the event manager, and producers publish events to it. When the event manager receives an event from a producer, it forwards the event to the corresponding consumers. If a consumer is unavailable, the event manager retains the event and redelivers it to that consumer after a period of time.
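The retain-and-redeliver behavior described above can be sketched in a few lines. Everything here (`EventManager`, the boolean `handle` return signaling availability) is a hypothetical, in-memory stand-in for a real event manager:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the event-manager behavior described above (illustrative names):
// producers publish to the manager; if a consumer is unavailable, the event
// is retained and redelivered later instead of being lost.
class EventManager {
    public interface Consumer { boolean handle(String event); } // false = unavailable

    private final Map<String, Consumer> subscribers = new HashMap<>();
    private final Map<String, List<String>> retained = new HashMap<>();

    public void subscribe(String topic, Consumer c) { subscribers.put(topic, c); }

    // Forward immediately if the consumer accepts; otherwise retain the event.
    public void publish(String topic, String event) {
        Consumer c = subscribers.get(topic);
        if (c == null || !c.handle(event)) {
            retained.computeIfAbsent(topic, t -> new ArrayList<>()).add(event);
        }
    }

    // Periodic redelivery pass: retry every retained event.
    public void redeliver() {
        for (Map.Entry<String, List<String>> e : retained.entrySet()) {
            Consumer c = subscribers.get(e.getKey());
            if (c == null) continue;
            e.getValue().removeIf(c::handle);  // drop events the consumer now accepts
        }
    }

    public int retainedCount(String topic) {
        return retained.getOrDefault(topic, List.of()).size();
    }

    public static void main(String[] args) {
        EventManager mgr = new EventManager();
        List<String> received = new ArrayList<>();
        boolean[] consumerUp = {false};
        mgr.subscribe("order.created", ev -> consumerUp[0] && received.add(ev));
        mgr.publish("order.created", "order-1");  // consumer down: event retained
        consumerUp[0] = true;                     // consumer comes back online
        mgr.redeliver();                          // event redelivered, not lost
        System.out.println(received);             // [order-1]
    }
}
```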

EDA architecture often has the following characteristics:

  • Broadcast communication: a participating system broadcasts an event to any participant interested in it.
  • Real time: when a business event occurs, the EDA architecture can push it to consumers immediately, without waiting.
  • Asynchronous: the publishing system does not wait for the receiving system to process the event; it hands the event to the EDA layer and returns.
  • Fine-grained: anything with independent business value can be published as a fine-grained event, rather than the coarse-grained events of traditional services.
  • Complex event processing: events can be aggregated and assembled into event chains according to business process requirements, to support complex event processing.
  • Parallel running: multiple events can run at the same time, and a single event can be distributed to multiple subscribers simultaneously.
  • Non-blocking: EDA provides message persistence mechanisms such as MQ, so events do not block even under high concurrency.

In short, message integration, asynchrony, complete decoupling, publish-subscribe, and event chains are the core of the EDA architecture. But before diving into EDA, including CEP (complex event processing), we should first understand how EDA differs from traditional process-driven business analysis. A simple comparison of the two can be described as follows:

(Figure: comparison of process-driven and event-driven analysis)

Explanation of core business analysis based on EDA

In an event-driven architecture, the core of business analysis is event identification, whereas traditional methods center on key processes and activities. In terms of the overall approach, the traditional method analyzes business processes only down to level 2 or 3 and identifies business activities and interaction points, while EDA must analyze the lowest-level (L4) EPC event process chain diagram and identify the key business events along with their decomposition and aggregation.

In terms of what is analyzed, the traditional method cares only about the business activities themselves, not about what triggers an activity or about the business events generated when it completes. The EDA-based analysis method must open up each business activity to identify its triggering conditions and the business-object state changes it causes; those state-change points are usually the key points for identifying events.

The details can be described as follows:

(Figure: traditional vs. EDA-based business analysis)

Identifying events from business requirement use cases

Business event identification can start from the business requirement use cases: analyze the preconditions that trigger each business use case, the state flow of its business objects, and its follow-on operations, so as to find the event inputs and the events generated by each business activity.

As the figure below shows, event identification is usually finer-grained than use case identification. The basic flow, extension flows, and business rules inside each business use case must be analyzed in detail, with particular attention to changes in the state of core business objects and documents. We also need to focus on the trigger conditions in the use case analysis, since these are often the origin of event chains, or the triggers of message event subscriptions.

(Figure: event identification from business use cases)

Complex events: forming event chains from identified events

Traditional process-based business analysis usually examines only the business process and its specific activities, not the business events generated before and after each activity, which traces back to early interface platforms' focus on data integration. To meet real-time business response requirements, we must accurately identify business events and then design the processing and response mechanisms around them. Following the EPC (event process chain) analysis approach, the traditional analysis process must be refined by adding event identification points and event decomposition/aggregation relationships. When forming event chains, a number of complex scenarios must be analyzed, including one-to-many distribution and subscription of events, and the aggregation of multiple events, where the next business activity and event are triggered only after a specific business rule is satisfied. All of these must be considered in complex event analysis.

(Figure: EPC event chain with decomposition and aggregation)

From EDA to CQRS


As the name suggests, CQRS (Command Query Responsibility Segregation) separates the command side (create, update, and delete, i.e. CUD operations) from the query side (reads). The command side still follows the traditional domain-model approach, but a message/event mechanism is added so that the changes made by CUD operations are written to the database asynchronously through message events.

On the query side of CQRS, the database is queried directly and the results are returned as DTOs; this side is relatively simple. On the command side, operations are performed by sending specific commands, which the command bus dispatches to specific command handlers. A command handler does not save the object's state directly to external persistent storage; instead it obtains a series of domain events from the domain object, saves those events to the event store, and publishes them to the event bus for further processing. The event bus then coordinates handing each event to its specific event handler, and the event handler finally saves the object's state to the corresponding query database.

(Figure: CQRS command and query flow)
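The command/query flow described above can be condensed into an in-memory sketch. All names (`ChangeCustomerCredit`, the collapsed event bus, the map used as a read model) are illustrative; a real implementation would persist the event store and update the read database asynchronously:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal in-memory CQRS sketch (illustrative names): a command handler
// produces domain events, events go to an event store and an event bus,
// and an event handler updates the read model that queries are served from.
class CqrsSketch {
    // --- command side: commands and the domain events they produce ---
    record ChangeCustomerCredit(String customerId, int newRating) {}
    record CustomerCreditChanged(String customerId, int newRating) {}

    static final List<Object> eventStore = new ArrayList<>();   // append-only log
    // --- query side: a denormalized read model, updated via events ---
    static final Map<String, Integer> readModel = new HashMap<>();

    // Command handler: derive domain events, persist them, publish them.
    static void handle(ChangeCustomerCredit cmd) {
        CustomerCreditChanged event =
                new CustomerCreditChanged(cmd.customerId(), cmd.newRating());
        eventStore.add(event);      // save to the event store
        eventBusPublish(event);     // hand to the event bus
    }

    // Event bus + event handler collapsed into one step for brevity:
    // the handler writes the object's state into the query database.
    static void eventBusPublish(CustomerCreditChanged event) {
        readModel.put(event.customerId(), event.newRating());
    }

    // Query side: read the read model directly and return a plain value (DTO).
    static int queryCredit(String customerId) {
        return readModel.getOrDefault(customerId, -1);
    }

    public static void main(String[] args) {
        handle(new ChangeCustomerCredit("C001", 85));
        System.out.println(queryCredit("C001"));  // 85
        System.out.println(eventStore.size());    // 1 event persisted
    }
}
```

In a real deployment the `eventBusPublish` step is asynchronous, which is exactly where the eventual-consistency caveat discussed below comes from.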

The first thing CQRS brings to mind is read-write separation at the database level, and indeed the two match well. Because of the event-driven, message-subscription model, it is easy to push data-change information to the read database so that it is updated promptly. The read side can also be split off onto distributed or unstructured stores such as Solr or NoSQL databases to achieve elastic horizontal scalability.

When command and query responsibilities are not separated, the scalability of the model suffers, and the domain model itself becomes bloated: entities are passed around as complete DTO objects, so even when only a few fields need to be updated or queried, the whole heavy model is still carried along.

Separating command and query responsibilities not only improves the scalability of the overall framework model, but also completely decouples the two kinds of business rules and their implementations, which eases subsequent development, operations, and maintenance. Especially where business scenarios and logic are complex, this decoupling makes the whole development architecture clearer and simpler.

At the same time, because commands are written asynchronously as events, there is no synchronous blocking or long-held connection, which helps the overall response performance of the platform under high concurrency.

Of course, the biggest problem with the CQRS pattern is that strong consistency between the command side and the query side cannot be guaranteed. The data you see on the query interface may well not be the latest persisted data; this is inherent in the latency of asynchronous writes through the message pipeline.

Second, the CQRS pattern rests on an important assumption: once a command or event has been issued, the receiver must, barring exceptional circumstances, be able to receive and process it successfully. Otherwise large numbers of error messages would have to be written back asynchronously, which would increase the complexity of the system. For example:

When we buy a product on an e-commerce platform, once the order is submitted successfully it should take effect, and inventory must be available so the order can be shipped and delivered. It must not happen that we discover at the delivery stage that there is no inventory and the order has to be cancelled; that would greatly reduce the usability of the system.

That is, in the asynchronous command-and-event scenario, once a command is sent successfully, even though we do not receive the handler's processing result right away, we assume by default that the receiver will process the event successfully. And indeed, in the CQRS framework, once a command event has been issued, we do not wait for any feedback.

There is also another CQRS implementation scenario: internal command processing is still event-based and asynchronous, but the customer's front-end operation waits synchronously for the result. In that case the front-end connection is held open, while back-end resources such as database connections need not be held for the duration.

Under the CQRS model, because responsibilities are separated, we can populate multiple read databases by subscribing to events and messages. These read stores can be structured or unstructured; they can serve the business query functions themselves, or support distributed full-text retrieval over massive data sets.

Implementing a CQRS framework is not simply a matter of applying a design pattern. What matters more is whether eventual consistency is acceptable and whether, under that constraint, the traditional synchronous, request-driven business logic can be transformed into an asynchronous, event-chain-driven model. To achieve this transformation, we must be able to carve out independent, autonomous commands and events, and ensure that once those events reach the back-end business modules they can be processed successfully (that is, validation must be done up front).

Converting synchronous interface calls to local message caching

This approach works much like message middleware. Suppose, for example, we design an interface that synchronously sends orders to an ERP system. If a real-time call to that interface raises an exception, we can store the message locally first, then use a scheduled task and a retry mechanism to resend it to the target system.

In other words, the business function does not care whether the real-time send succeeds; the business system's own mechanism completes the send through retries. To make this work, it is best to separate the document integrity validation interface from the actual data-sending interface when designing the interface functions: first call the validation interface, and only send the message after validation passes. This ensures the message will not ultimately fail to send because of data integrity problems.
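The local message table with scheduled retry can be sketched as follows; `LocalMessageStore`, `ErpClient`, and the in-memory "table" are hypothetical stand-ins for a real database table and a scheduled job:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the local-message pattern described above (illustrative names):
// the business call records the message in a local table and returns; a
// scheduled job later pushes pending messages to the ERP, keeping failures.
class LocalMessageStore {
    public interface ErpClient { boolean sendOrder(String order); } // may fail

    private final List<String> pendingTable = new ArrayList<>();    // local "table"
    public final List<String> sentLog = new ArrayList<>();

    // Business code only records the message locally; it never blocks on ERP.
    public void submitOrder(String order) { pendingTable.add(order); }

    // Scheduled retry task: attempt each pending message, keep the failures.
    public void retryJob(ErpClient erp) {
        pendingTable.removeIf(order -> {
            boolean ok = erp.sendOrder(order);
            if (ok) sentLog.add(order);
            return ok;                  // remove only what was delivered
        });
    }

    public int pendingCount() { return pendingTable.size(); }

    public static void main(String[] args) {
        LocalMessageStore store = new LocalMessageStore();
        store.submitOrder("PO-1");
        store.submitOrder("PO-2");
        store.retryJob(o -> false);     // ERP down: nothing is lost
        System.out.println(store.pendingCount());   // 2
        store.retryJob(o -> true);      // ERP back: both delivered
        System.out.println(store.pendingCount());   // 0
    }
}
```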

Local caching or persistence of query data

Memcached is a distributed caching system, originally developed by Brad Fitzpatrick for LiveJournal and since used by many websites. It is open source software released under the BSD license.

The memcached client API typically computes a 32-bit cyclic redundancy check (CRC-32) over the key and uses it to distribute data across different machines. When the cache is full, new data replaces old data through an LRU mechanism. Because memcached is usually used purely as a cache, applications that use it need extra code to update memcached whenever they write back to the slower backing system (such as a back-end database).
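The CRC-32-based placement described above amounts to hashing the key and taking it modulo the server count. A minimal sketch, assuming a fixed server list (real memcached clients typically use more elaborate schemes such as consistent hashing):

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

// Sketch of CRC-32-based key placement: hash the key and take it modulo
// the server count to pick a cache node. Illustrative only; production
// clients usually prefer consistent hashing to survive node changes.
class CacheNodeSelector {
    public static int selectNode(String key, int serverCount) {
        CRC32 crc = new CRC32();
        crc.update(key.getBytes(StandardCharsets.UTF_8));
        return (int) (crc.getValue() % serverCount);  // stable node index
    }

    public static void main(String[] args) {
        // The same key always maps to the same node.
        System.out.println(selectNode("supplier:1001", 3));
        System.out.println(selectNode("supplier:1001", 3));
    }
}
```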

For a real-time query interface, the basic data being queried can be cached locally: if the real-time query fails, we can query the locally cached data directly, reducing the impact on business functions.

Take a supplier query service, for example. If the interface provided by the master data system fails, we can directly query locally cached supplier data. This pattern suits data that changes infrequently, and it also reduces the performance cost of calling the interface in real time.
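The fallback-to-local-cache pattern can be sketched as below; the `masterDataService` supplier stands in for the real master data interface, and all names are illustrative:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Sketch of the fallback pattern described above (illustrative names): try
// the master-data service first; on success refresh the local cache, on
// failure serve the last known copy so the business function keeps working.
class SupplierQueryWithFallback {
    private final Map<String, String> localCache = new HashMap<>();

    public String getSupplier(String code, Supplier<String> masterDataService) {
        try {
            String fresh = masterDataService.get();
            localCache.put(code, fresh);       // keep the local cache warm
            return fresh;
        } catch (RuntimeException serviceDown) {
            // Master data system unavailable: fall back to cached data.
            return localCache.getOrDefault(code, null);
        }
    }

    public static void main(String[] args) {
        SupplierQueryWithFallback q = new SupplierQueryWithFallback();
        System.out.println(q.getSupplier("S01", () -> "ACME Ltd"));  // live call
        String cached = q.getSupplier("S01", () -> {
            throw new RuntimeException("master data system down");
        });
        System.out.println(cached);  // ACME Ltd, served from the local cache
    }
}
```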

If the interface service is registered on an API gateway or ESB service bus, we can also consider enabling caching on the bus itself: a repeated call with the same parameters can be served from cached data, so even if the source business system is unavailable, the current interface call still succeeds.

Landing data locally where appropriate

Under a microservice architecture we keep emphasizing that real-time data should be accessed in real time through service calls, without integrating or synchronizing the underlying databases; this satisfies both strong data consistency and real-time requirements.

But the problem with that is strong coupling: if the data provider fails, the consumer's business functions become unusable.

Therefore we can, where appropriate, adopt data integration in the data-landing mode: during the overall microservice implementation, data that changes infrequently can be replicated (landed) into the local microservice module. This reduces real-time interface service calls and increases the availability and reliability of the individual microservice module.

How to refactor tight coupling

If the microservices are already implemented and a great deal of tight coupling has emerged, we need to consider refactoring the microservice architecture. The specific refactoring approaches can be considered from the following angles.

Merging two tightly coupled microservices

(Figure: two tightly coupled microservices)

If there are a large number of interface calls back and forth between two microservices, they can be considered tightly coupled.

Or, by my original criterion: if more than 30% of the background data tables behind two microservices need to be accessed across both services, the two microservices are highly coupled.

In this case, the root cause is that the original division of microservices was too fine, and the two microservices need to be merged.

Turning cross dependencies into a common dependency

(Figure: cross dependency refactored into a shared lower-layer dependency)

We should know that in traditional software development, two components are not allowed to depend on each other.

However, with IoC containers and modern microservice development, reflection-based invocation is used extensively, so mutual dependency between two components no longer causes a technical problem. It is still not a good design.

If two or more microservices depend on each other, and the shared content is genuinely generic, the best approach is to extract all of the common content, sink it into a common base microservice module, and have that module provide services upward.

That is, cross dependencies are transformed into a common dependency on a lower layer.

Migrating an implementation unit between microservices

(Figure: migrating implementation unit A1 from microservice A to microservice B)

Why does this happen?

In short, the original division of microservice modules and business functions was unreasonable. Take part A1 of microservice A in the figure above: A1 needs to be called by microservice B, but A1 itself depends very little on the rest of microservice A.

This is a typical case of function A1 being placed in the wrong module.

The best approach is to migrate the A1 function from microservice A to microservice B, correcting the originally unreasonable division of the business.

Transforming fine-grained services into coarse-grained services

(Figure: fine-grained vs. coarse-grained service interaction)

A service should by nature be coarse-grained, exposing only what needs to be exposed.

For example, microservice A implements customer credit checking and rating, and microservice B needs a customer's credit. There are two approaches:

The first: B calls multiple interfaces of A to query the customer's basic information, transaction information, and default records, and then computes the customer's credit itself.

The second: B only needs to pass in the customer code, and microservice A returns the final credit rating.

The latter is what we often call a coarse-grained interface, or a domain service. Interaction between services should center on domain services and coarse-grained services, and avoid exposing bare CRUD interfaces over database tables.
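The two approaches can be contrasted as interface sketches; all names here are hypothetical, and the scoring logic is stubbed:

```java
// Sketch contrasting the two interface styles described above (hypothetical
// names). The fine-grained style forces the caller to orchestrate and compute;
// the coarse-grained domain service exposes a single business-level call.
class GranularityDemo {
    // Approach 1: fine-grained, table-level interfaces; B must call all three
    // and implement the credit-scoring logic itself.
    interface CustomerCrudService {
        String getBasicInfo(String customerCode);
        int getTransactionCount(String customerCode);
        int getDefaultCount(String customerCode);
    }

    // Approach 2: a coarse-grained domain service; B passes the customer code
    // and A returns the final credit rating.
    interface CustomerCreditService {
        String getCreditRating(String customerCode);
    }

    // Microservice A's implementation of the domain service (stubbed logic):
    // the scoring details stay hidden inside A; B never sees the tables.
    static class CreditServiceImpl implements CustomerCreditService {
        @Override
        public String getCreditRating(String customerCode) {
            return customerCode.isEmpty() ? "UNKNOWN" : "AA";
        }
    }

    public static void main(String[] args) {
        GranularityDemo.CustomerCreditService service = new CreditServiceImpl();
        System.out.println(service.getCreditRating("C1001"));  // AA
    }
}
```

Note the design consequence: with the coarse-grained interface, a change to A's scoring rules or table layout no longer ripples into B.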

In closing

Welcome to follow my official account 【Calm as a yard】, where I regularly publish Java-related articles, learning materials, and organized reference material.
