On February 13, 2020, Apache Dubbo officially published a notice for a deserialization vulnerability (CVE-2019-17564), rated at a moderate severity level. Although Qingzhou Microservice supports Apache Dubbo, thanks to optimizations in its architecture design, users are guaranteed not to be affected by this vulnerability.
Vulnerability principle and scope of impact
Apache Dubbo supports a variety of communication protocols, with the Dubbo protocol as the official default. When users choose the HTTP protocol for communication, an attacker can exploit this vulnerability to remotely execute malicious code on the target system, leading to takeover of the host, data leakage, and so on. According to the official notice, the vulnerability mainly affects the following versions of Apache Dubbo:
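To make the trigger condition concrete: the vulnerability only comes into play when a provider is explicitly switched from the default dubbo protocol to http. A typical (hypothetical) provider configuration that would fall in scope might look like the following, where the interface name is purely illustrative:

```xml
<!-- Hypothetical Dubbo provider configuration for illustration only.
     The vulnerability applies when the "http" protocol is selected
     instead of the default "dubbo" protocol. -->
<dubbo:application name="demo-provider"/>
<dubbo:protocol name="http" port="8080"/>
<dubbo:service interface="com.example.DemoService" ref="demoService"/>
```

Providers that keep the default `<dubbo:protocol name="dubbo"/>` setting are not exposed through this particular attack path.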
- 2.7.0 <= Apache Dubbo <= 2.7.4
- 2.6.0 <= Apache Dubbo <= 2.6.7
- Apache Dubbo = 2.5.x
Official solution: mass upgrades carry high risk
Apache Dubbo officially recommends that users upgrade to the secure version 2.7.5 to address this vulnerability. However, this approach is fairly drastic, and upgrading in bulk carries considerable risk, mainly for the following two reasons:
First, Dubbo has a long history, and version upgrades are often not smooth.
Many customers have used Dubbo for a long time. Dubbo's maintenance was suspended in 2014, and many users switched to Dubbox, the fork launched by Dangdang; maintenance of Dubbo then resumed in 2017. These historical versions and branches involve many dependency packages and middleware. In practice, upgrading is not as simple as swapping a jar package: there may be a large number of incompatibilities that must be checked one by one before the upgrade can succeed.
Second, Dubbo deployments contain a large number of services, so a batch upgrade means a large amount of work.
Customers who adopted Dubbo early and have run microservices for a long time now have a large number of services, including core ones. If only some of them are upgraded, incompatibilities may appear; if all are upgraded in one batch, the workload is enormous, and touching core business carries the risk of service outages, which customers find very hard to accept.
Qingzhou's solution: a triple mechanism to avoid the vulnerability risk
Qingzhou Microservice is a one-stop cloud-native service platform, built on an open-source technology stack and supporting flexible IT architectures such as private and public clouds. It consists of eight components: the microservice framework NSF, Application Performance Management (APM), the container platform NCS, the distributed transaction service GTXS, an API gateway, PaaS middleware, business monitoring, and a continuous integration (CI/CD) pipeline.
Among them, NSF is closely related to Dubbo.
When customers use an open-source microservice framework directly, most of them work in the mode shown on the left of the figure above: all service management and governance functions have to be developed by themselves, producing a large amount of code outside the business logic. Moreover, Dubbo and Spring Cloud involve a lot of configuration, and these settings and code paths are scattered across projects, so every time a governance function is added, every business service has to be re-released and brought back online.
The Qingzhou microservice framework NSF uses full-stack bytecode enhancement to bring business systems under management with zero intrusion and zero performance loss. Service management and governance functions are concentrated in the agent, so development only needs to focus on business code; the two are separated at the code level. NSF also provides a unified, visual, dynamic governance interface that cleanly divides development and operations responsibilities, with configuration changes taking effect in real time.
NSF is built on native Spring Boot technology. It can manage and remain compatible with application systems built on mainstream frameworks such as Dubbo, gRPC, and Istio, while providing an easy-to-use, secure, people-oriented visual interface.
How does NSF solve Dubbo's vulnerability problem?
First, NSF provides an authentication and authorization mechanism both for external access and for calls between services. You can flexibly configure whether the authentication/authorization switch is turned on for a given service. Once it is on, a client request is intercepted by the agent and sent to NSF for authentication and authorization; only if it passes is the request released to reach the server side. For core business services, turning this mechanism on so that only trusted services may call them greatly reduces the risk.
Second, NSF can flexibly configure blacklists and whitelists for calls between services, keyed either by service name or by IP address segment. On the one hand, only trusted services and IP segments can access protected services; on the other hand, once a risk is discovered, it can be configured on the interface immediately, and the agent will push the policy down at once to intercept the risk in time.
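The interception logic described above can be pictured with a minimal sketch. The class and method names below are illustrative, not NSF's actual API, and the IP check is simplified to a prefix match on a segment rather than full CIDR arithmetic: a request is released only when the caller's service name is whitelisted and its IP falls in an allowed segment.

```java
import java.util.Set;

// Minimal illustration of a service-name / IP-segment whitelist check.
// All names here are hypothetical; NSF's real agent applies an equivalent
// policy transparently, without changes to business code.
public class CallAllowlist {
    private final Set<String> allowedServices;
    private final String allowedSegmentPrefix; // e.g. "10.1.2." for a /24 segment

    public CallAllowlist(Set<String> allowedServices, String allowedSegmentPrefix) {
        this.allowedServices = allowedServices;
        this.allowedSegmentPrefix = allowedSegmentPrefix;
    }

    // Release the request only when both the caller name and its IP are trusted.
    public boolean permits(String callerService, String callerIp) {
        return allowedServices.contains(callerService)
                && callerIp.startsWith(allowedSegmentPrefix);
    }

    public static void main(String[] args) {
        CallAllowlist policy = new CallAllowlist(Set.of("order-service"), "10.1.2.");
        System.out.println(policy.permits("order-service", "10.1.2.15"));   // true
        System.out.println(policy.permits("unknown-service", "10.1.2.15")); // false
    }
}
```

Because the check runs in the intercepting agent rather than in the service itself, updating the whitelist from the governance interface takes effect without redeploying any business service.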
Third, in the long run, the vulnerable Dubbo versions should still be upgraded. Upgrading requires a strategy: it can proceed layer by layer and batch by batch according to the dependencies between services, keeping the risk under control. With NSF, you can see the dynamic dependencies between services on the interface and upgrade step by step along this dependency graph.
Best-practice advice: in the face of vulnerabilities, the risk can be controlled
Qingzhou is a full-stack solution for microservice architecture. Beyond the products themselves, NetEase has also distilled a full set of best practices from implementing them in its own businesses, and can provide consulting services around them. For the Dubbo vulnerability, the Qingzhou microservice team offers the following best-practice suggestions, so that even with the vulnerability present, the risk can be controlled in time.
First, the container layer: two container-layer best practices matter most here. One, keep the container image as small as possible and remove unnecessary tools, so that even if the Dubbo service deployed in the container is compromised, there are no commands to execute inside it. Two, do not run containers in privileged mode, so that even a compromised container cannot escape to the host.
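A minimal sketch of the first practice, assuming a hypothetical Spring Boot fat jar named app.jar: build from a "distroless" base image, which ships no shell or package manager, and run as the image's built-in non-root user.

```dockerfile
# Hypothetical minimal image for a Dubbo/Spring Boot service (app.jar is illustrative).
# A distroless base contains no shell or package manager, so an intruder
# has essentially nothing to execute inside the container.
FROM gcr.io/distroless/java17-debian12
COPY app.jar /app/app.jar
# Run as the non-root user provided by the distroless image.
USER nonroot
ENTRYPOINT ["java", "-jar", "/app/app.jar"]
```

For the second practice, simply launch the container without the `--privileged` flag (and without adding extra Linux capabilities), so a compromise stays confined to the container.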
Second, the Kubernetes layer: it is not recommended to leave all pods interconnected by default; instead, enable appropriate NetworkPolicy rules for isolation.
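Such isolation can be sketched with a NetworkPolicy like the one below. The namespace, labels, and port are all hypothetical: only pods labeled as the trusted consumer may reach the Dubbo provider's port, and all other pod-to-pod traffic to it is dropped.

```yaml
# Illustrative NetworkPolicy (namespace, labels, and port are hypothetical):
# only pods labeled app: order-service may reach the provider on TCP 8080.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: provider-allowlist
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: dubbo-provider
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: order-service
      ports:
        - protocol: TCP
          port: 8080
```

Note that NetworkPolicy is only enforced when the cluster's network plugin supports it.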
Third, the microservice layer: upgrading Dubbo services is genuinely painful, because there are many services with complex dependencies, but following the best practices below makes the upgrade easier.
Specification 1: calls may only go one way; circular calls are strictly prohibited.
After a system is split into microservices, the dependencies between services become complex. If there are circular calls, upgrading becomes a headache: it is unclear which service to upgrade first and which later, and the system is hard to maintain.
Therefore, calls between layers are specified as follows:
- The basic service layer mainly performs database operations and some simple business logic; it is not allowed to call any other service;
- The composite service layer can call the basic service layer to complete complex business logic, and composite services may call one another, but circular calls are not allowed and controller-layer services must not be called;
- The controller layer can call composite-service-layer services and may not be called by any other service.
If a circular call exists, for example A calls B and B also calls A, split each of them into two layers, a controller layer and a composite service layer: A's controller calls B's lower layer, and B's controller calls A's lower layer. Alternatively, a message queue can be used to turn synchronous calls into asynchronous ones.
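The splitting approach above can be sketched in a few lines (all class names are illustrative; in reality each layer would be a separate service rather than a local object). The cycle A → B → A disappears because controllers only ever call downward into composite-layer services:

```java
// Illustrative refactoring of a circular call; class names are hypothetical.
// Before: A -> B and B -> A formed a cycle.
// After: each service is split into a controller layer and a composite layer,
// and controllers only ever call downward into composite services.

class AService {            // composite layer of A
    String data() { return "a-data"; }
}

class BService {            // composite layer of B
    String data() { return "b-data"; }
}

class AController {         // controller layer of A: calls B's lower layer
    private final BService b = new BService();
    String handle() { return "A saw " + b.data(); }
}

class BController {         // controller layer of B: calls A's lower layer
    private final AService a = new AService();
    String handle() { return "B saw " + a.data(); }
}

public class LayeredCalls {
    public static void main(String[] args) {
        System.out.println(new AController().handle()); // A saw b-data
        System.out.println(new BController().handle()); // B saw a-data
    }
}
```

With all call arrows pointing downward, an upgrade can safely proceed from the basic layer upward, one layer at a time.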
Specification 2: interface data definitions must not be embedded or passed through verbatim.
Microservice interfaces usually carry data in data structures. If a data structure is passed through transparently, that is, the same structure is used all the way from the bottom layer to the top, or an upper-layer structure embeds a lower-layer one, then adding or deleting a field in that structure has a very wide impact.
Therefore, interface data definitions should be agreed between each pair of interfaces; embedding and pass-through are strictly forbidden, and even a nearly identical structure should be redefined. This way, a change to an interface's data definition affects only its direct caller and callee, making interface updates controllable and upgrades easy.
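This rule can be sketched as follows. The type and field names are hypothetical: the composite layer maps its internal entity into a separately defined contract for the layer above, instead of handing the entity through, so internal field changes cannot ripple upward.

```java
// Illustrative DTOs (all names hypothetical): each pair of interfaces defines
// its own data contract instead of passing one shared structure through layers.

class OrderEntity {                 // basic layer's internal structure
    long id;
    String buyer;
    String internalAuditFlag;       // internal detail, never exposed upward
}

class OrderSummaryDto {             // contract between composite layer and controller
    long id;                        // deliberately re-declared, not embedded,
    String buyer;                   // even though the fields look the same
}

public class DtoMapping {
    // The composite layer maps its internal entity to the upper contract, so
    // adding/removing fields on OrderEntity affects only this mapping code.
    static OrderSummaryDto toSummary(OrderEntity e) {
        OrderSummaryDto dto = new OrderSummaryDto();
        dto.id = e.id;
        dto.buyer = e.buyer;
        return dto;
    }

    public static void main(String[] args) {
        OrderEntity entity = new OrderEntity();
        entity.id = 42;
        entity.buyer = "alice";
        entity.internalAuditFlag = "ok";
        OrderSummaryDto dto = toSummary(entity);
        System.out.println(dto.id + " " + dto.buyer); // 42 alice
    }
}
```

The small cost of writing the mapping buys the property the text describes: when an interface changes, only the two parties to that interface need to be touched.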