Youzan Containerization Practice

Date: 2019-09-11

Preface

Containerization has become a trend: it solves many operations problems around efficiency, cost, and stability. But adopting containers brings its own problems and inconveniences. During our containerization effort we ran into issues on many fronts, from container technology itself to adapting the operations toolchain and changing users' habits. This article describes the problems we encountered along the way and the solutions we adopted.

The original motivation for containerization at Youzan

At any given time we have many projects being developed in parallel, and contention for shared environments seriously hurts development, testing, and release efficiency. We need to give each project its own environment for daily development, debugging, and testing (QA), and these project environments must be created and destroyed along with the project's life cycle. Our earliest containerization requirement was therefore rapid environment delivery.

[Youzan environments]
In our standard R&D process we maintain four stable environments: daily, QA, pre-production (pre), and production (prod). Development, testing, and debugging are not done directly in a stable environment; instead, an independent project environment is pulled out. As the code is developed, tested, and pre-validated, and finally released to production, it is synchronized back to the daily/QA stable environments.

[Project environment]

We provide an environment delivery scheme that supports maximum project parallelism with minimal resource input. On top of the daily/QA stable environments, N isolated project environments are carved out. A project environment only needs to create compute resources for the applications the project actually touches; any other service it calls is provided by the stable environment. Within project environments, we use container technology extensively.

[Continuous delivery]

Later, on top of this rapid project-environment delivery, we built a continuous delivery pipeline. Today there are more than 600 project/continuous delivery environments which, together with the daily/QA stable environments, involve four to five thousand compute instances, most with very low CPU and memory utilization. Containerization solves the environment delivery efficiency problem well and raises resource utilization to save cost.

Youzan's containerization scheme

Our containerization scheme is based on Kubernetes (1.7.10) and Docker (1.12.6 and 1.13.1). Below are the problems we encountered in each area and our solutions.

Network

Our backend is mainly Java applications running a customized Dubbo service scheme. Since the whole system could not be containerized in one step, containers had to be network-routable to the existing clusters. Because we could not solve interoperability between an overlay network and the public cloud's network at the time, we abandoned the overlay scheme at the outset and adopted macvlan under a host network. This solved network interoperability and avoided network performance problems, but at the cost of not enjoying the elasticity of public cloud resources. As cloud architectures evolved and more cloud vendors began to support container overlay networks interoperating with VPC networks, the elastic resource problem has been alleviated.

Isolation

Container isolation relies mainly on the kernel's namespace and cgroup technologies, which constrain processes, CPU, memory, IO, and other resources well, but fall short of virtual machines in other respects. The most common problem we encounter is that the CPU count and memory size seen inside a container are wrong: because the /proc filesystem is not namespaced, a process in the container "sees" the physical machine's CPU count and memory size.

Memory problem

Our Java applications decide their JVM parameters based on the server's memory size. We use lxcfs to work around this: lxcfs mounts cgroup-aware views over files such as /proc/meminfo, so the application sees the container's memory limit rather than the host's.
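
As a rough illustration of both the problem and the fix, here is a minimal sketch (assuming cgroup v1 paths, which match our Kubernetes 1.7 / Docker 1.12 era) that prints the memory size an application "sees" alongside the container's real limit:

    /* meminfo_check.c - compare what /proc reports with the cgroup limit */
    #include <stdio.h>

    /* Print the first line of a file, e.g. "MemTotal: 32780488 kB". */
    static void print_first_line(const char *path) {
        char buf[256];
        FILE *f = fopen(path, "r");
        if (f && fgets(buf, sizeof buf, f))
            printf("%-50s %s", path, buf);
        if (f) fclose(f);
    }

    int main(void) {
        /* Without lxcfs this shows the *host's* total memory, because
           /proc is not namespaced. With lxcfs mounted over /proc/meminfo,
           it reflects the container's cgroup limit instead. */
        print_first_line("/proc/meminfo");

        /* The authoritative limit lives in the cgroup filesystem
           (cgroup v1 path, used here as an assumption for illustration). */
        print_first_line("/sys/fs/cgroup/memory/memory.limit_in_bytes");
        return 0;
    }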

The CPU count problem

Because we oversell resources, and Kubernetes by default enforces CPU limits through CPU shares, the CPU count remains inaccurate even with lxcfs. The JVM and many Java SDKs size their thread pools from the CPU count the system reports, so Java applications create far more threads, and use far more memory, than they would on a virtual machine, which seriously affects operation. Other kinds of applications have similar problems.
We set an environment variable NUM_CPUS according to the container's specification; a Node.js application, for example, creates its worker processes from this variable. For Java applications, we simply override the JVM_ActiveProcessorCount function via LD_PRELOAD so that it returns the value of NUM_CPUS directly.
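
A minimal sketch of such a shim follows; JVM_ActiveProcessorCount is the libjvm function the JVM consults for the processor count, and the fallback value here is our own choice for illustration:

    /* numcpus.c - build: gcc -shared -fPIC -o libnumcpus.so numcpus.c
       use:   LD_PRELOAD=/path/to/libnumcpus.so java ... */
    #include <stdlib.h>

    /* Preloading this definition shadows the JVM's own copy, so thread
       pools sized from the CPU count follow the container spec instead
       of the physical machine. */
    int JVM_ActiveProcessorCount(void) {
        const char *s = getenv("NUM_CPUS");
        int n = s ? atoi(s) : 0;
        return n > 0 ? n : 1; /* fall back to 1 if NUM_CPUS is unset */
    }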

Application onboarding

Before containerization, all of Youzan's applications had already been onboarded to our publishing system, where their packaging and release processes are standardized. So the cost of onboarding an application onto containers is relatively small, and business teams do not need to provide Dockerfiles themselves:

  1. Node.js, Python, PHP-SOA, and other supervisor-managed applications only need to provide an app.yaml file in the Git repository defining the required runtime and startup command.
  2. Java applications with standardized startup need no changes from the business side.
  3. Non-standardized Java applications need to be standardized first.

Image building

Container images are divided into three layers: a stack layer (OS), a runtime layer (language environment), and an application layer (business code plus some auxiliary agents); the application and the agents are started by runit. Because our configuration is not fully separated from code, the application layer is packaged independently for each environment. Besides the business code, we also put auxiliary agents into the image according to the application's language. We originally wanted to split the agents into separate images and run multiple containers per pod, but because we could not solve the container startup-ordering problem within a pod, we ended up running all the services in a single container.

Our image build process is also scheduled through Kubernetes (onto designated build nodes). When a release task is initiated, the control system creates a build pod in the cluster; the build program compiles the code, installs dependencies, and generates a Dockerfile according to the application type and other parameters. Inside this pod, Docker-in-Docker is used to build the image and push it to the registry.
To speed up builds, we use PVCs to cache files such as Python virtualenvs, Node.js node_modules, and Java Maven packages. In addition, in earlier Docker versions the Dockerfile ADD instruction did not support specifying file owner and group, so whenever we needed files owned by a specific user (our applications run under the app account) we had to add an extra RUN chown, leaving the image with an extra layer of duplicated data. The Docker on our build nodes therefore uses a newer official CE version, which supports ADD --chown.

Ingress

Internal calls between applications already have fairly complete service-discovery and service-mesh schemes, so in-cluster access needs little extra work; load balancing only has to handle HTTP traffic from users and external systems. We had built a unified access (gateway) system before containerization, so we did not adopt an off-the-shelf containerized load balancing scheme. Instead, we implemented a controller following the Ingress mechanism: Ingress resources are provisioned in the unified access system, and the upstreams in its configuration are associated with Kubernetes services. A sync program watches kube-api, senses service changes, and updates the upstream server lists in the unified access system in real time.
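
At its core, such a sync program is a long-lived watch on the API server. Below is a minimal sketch using libcurl (the API server address and token are placeholders, and TLS/CA setup, reconnection, and JSON parsing are omitted; a production controller would use a proper client library):

    /* watch_services.c - build: gcc watch_services.c -lcurl */
    #include <stdio.h>
    #include <curl/curl.h>

    /* Each chunk of the streaming response carries JSON watch events such
       as {"type":"MODIFIED","object":{...}}. A real sync program would
       parse the event and push the updated upstream server list to the
       unified access system; here we only print it. */
    static size_t on_event(char *data, size_t size, size_t nmemb, void *ud) {
        (void)ud;
        fwrite(data, size, nmemb, stdout);
        return size * nmemb;
    }

    int main(void) {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *curl = curl_easy_init();
        if (!curl) return 1;

        struct curl_slist *hdrs = curl_slist_append(
            NULL, "Authorization: Bearer <service-account-token>");

        /* ?watch=true keeps the response open and streams change events. */
        curl_easy_setopt(curl, CURLOPT_URL,
                         "https://kube-apiserver:6443/api/v1/services?watch=true");
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, on_event);

        CURLcode rc = curl_easy_perform(curl); /* blocks while watching */
        if (rc != CURLE_OK)
            fprintf(stderr, "watch failed: %s\n", curl_easy_strerror(rc));

        curl_slist_free_all(hdrs);
        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return rc == CURLE_OK ? 0 : 1;
    }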

Container login and debugging

During container onboarding, developers reported that the web console was hard to use; despite repeated optimization, the experience still fell short of iTerm2. In the end we opened up SSH login for the project/continuous delivery environments, which require frequent login and debugging.
Another serious problem: when an application fails its health check at startup, the pod keeps being rescheduled, yet developers naturally want to inspect the failure scene. We therefore provide a debug release mode in which the container runs without a health check.

Logging

We have a dedicated log system, internally named Skynet. Most logs and business monitoring data are sent directly to Skynet through an SDK, so the containers' stdout logs serve only as an auxiliary troubleshooting aid. Our container log collection is based on fluentd: fluentd processes the logs, writes them to Kafka in the format agreed with Skynet, and Skynet finally processes them and stores them in Elasticsearch.

Gray release

The traffic we handle for gray release falls into three parts:

  1. User-side HTTP traffic
  2. HTTP calls between applications
  3. Dubbo calls between applications

First, we tag the various gray dimensions (users, shops, and so on) uniformly at the unified access entrance; then the unified access layer, the HTTP client, and the Dubbo client are modified so that these tags are transmitted along the whole call chain. When doing a gray release of containers, we first deploy the gray version, then configure gray rules in the unified access layer and the gray configuration center; every caller on the link perceives these rules, realizing the gray release.
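
On the HTTP side, the client change boils down to copying the gray tag from the inbound request onto every outbound call. A minimal sketch with libcurl (the header name X-Gray-Tags and the environment-variable hand-off are purely illustrative; our real tag format lives in the unified access and RPC SDKs):

    /* gray_forward.c - build: gcc gray_forward.c -lcurl */
    #include <stdio.h>
    #include <stdlib.h>
    #include <curl/curl.h>

    /* Attach the gray tag carried by the inbound request to a downstream
       call, so gray rules can be evaluated at every hop of the chain. */
    static int call_downstream(const char *url, const char *gray_tags) {
        CURL *curl = curl_easy_init();
        if (!curl) return -1;

        struct curl_slist *hdrs = NULL;
        if (gray_tags && *gray_tags) {
            char hdr[256];
            snprintf(hdr, sizeof hdr, "X-Gray-Tags: %s", gray_tags);
            hdrs = curl_slist_append(NULL, hdr); /* libcurl copies it */
        }

        curl_easy_setopt(curl, CURLOPT_URL, url);
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
        CURLcode rc = curl_easy_perform(curl);

        curl_slist_free_all(hdrs);
        curl_easy_cleanup(curl);
        return rc == CURLE_OK ? 0 : -1;
    }

    int main(void) {
        /* In a real service the tag comes from the inbound HTTP headers;
           an environment variable stands in for it here. */
        return call_downstream("http://downstream.example/api",
                               getenv("GRAY_TAGS"));
    }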

Containerization of the Standard Environments

Why containerize the standard environments

  1. As with project environments, servers in the standard stable environments (daily, qa, pre, and prod) mostly run at a low water level; more than half of them are wasted.
  2. For cost reasons, daily, qa, and pre run mixed on a single virtual machine, so whenever a stable environment is released, the standard stable environment and the project environments become temporarily unavailable.
  3. Virtual machine delivery is relatively slow, and doing gray releases with virtual machines is also more complex.
  4. Virtual machines often live for several years or longer, making convergence of operating system and base software versions very troublesome.

Progress of standard environment containerization

After the earlier project/continuous delivery rollout and iterations, most applications are themselves already containerized. But going to production requires the whole operations system to adapt to containers: monitoring, publishing, logging, and so on. At present our production containerization preparation is basically complete; the production network already partially runs front-end Node.js applications, and other applications are being migrated. We hope to share more production container experience in the future.

Concluding remarks

The above covers how we use containers at Youzan and the problems and solutions encountered along the way. Containerization of our production environment is still at an early stage, and we will surely meet more problems later; we hope to keep learning from each other and sharing more experience with you.

