Everyone loves Kubernetes. Isn't Docker good enough?

Time: 2021-11-28

Opening

When it comes to Docker, many people first think of it as a virtualization container, so it is easy to fall into the misconception that Docker merely adds another layer on top of the Linux operating system, like running VMware on the OS, and that Docker must therefore be slow and complex, nowhere near as comfortable as a natively installed service.

In fact, this is a misconception. The various services managed by Docker are native processes of the operating system, not virtualized guests. Docker's correct definition is an application container engine.

How should we understand this application container engine? Let's look at one of Docker's core principles: it achieves resource isolation through the Linux namespace mechanism. This isolation covers:

  1. UTS: isolates the host name and domain name
  2. IPC: isolates semaphores, message queues, and shared memory
  3. PID: isolates process IDs
  4. Network: isolates network devices, network protocol stacks, and network ports
  5. Mount: isolates mount points (file systems)
  6. User: isolates users and user groups

These isolation mechanisms are implemented by the Linux kernel's namespace feature, and they are the essence of Docker's container design.
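If you want to feel what a namespace is without Docker at all, the unshare tool from util-linux creates one directly. A minimal sketch (assumes a typical Linux host with root access):

    # Create new UTS and PID namespaces and run a shell inside them.
    # --fork plus --mount-proc makes the shell PID 1 with its own /proc view.
    sudo unshare --uts --pid --fork --mount-proc sh -c '
      hostname container-demo   # changes the name only inside this namespace
      hostname                  # prints container-demo
      ps -ef                    # shows only the processes in the new PID namespace
    '

The host's own hostname and process list are untouched; this is exactly the kind of isolation a Docker container gets, just assembled by hand.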

It's like a 300-square-meter house with one family living in it: the bedroom, kitchen, and bathroom are exclusively theirs. But the house is big enough for three families, which would share part of the cost and even bring the owner extra income. So the big house has to be re-planned to fit three families, with some house rules: some resources can be shared, but the key resources must be kept separate to protect privacy. In the end, everyone still lives, as equals, in one big house.

The point of this metaphor is that you should understand Docker as the planner and landlord of that house and its multiple tenants. Docker manages many container services running on the host: mysql, nginx, microservices, and so on are all container services. They all run as equals on one OS, but each stays in its own room and knows nothing about anyone else's. This protects the services from resource contention, and CPU, memory, and disk capacity are allocated according to the lease agreed on move-in (the quota enforcement itself is done by Linux cgroups, the companion mechanism to namespaces). Everyone lives together, yet no one can take advantage of anyone else. Of course, externally exposed network ports still need to be allocated so they do not clash.
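In Docker terms, the lease is just a set of flags on docker run. A minimal sketch (the image and the limits are illustrative):

    # Run nginx in its own "room": capped at half a CPU core and 256 MB of RAM,
    # with only host port 8080 mapped to port 80 inside the container.
    docker run -d --name web \
      --cpus 0.5 \
      --memory 256m \
      -p 8080:80 \
      nginx:stable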

With this capability, you can run a lot of services on limited cloud resources! My own company website runs on an Alibaba Cloud ECS instance with CentOS 7, with three Docker containers: nginx, MySQL, and WordPress. I allocated only 512 MB of memory, which is stingy enough, but it runs properly. The only issue is that the physical memory is so small that restarting a service sometimes fails with the OS reporting insufficient resources, and I have to restart Docker to free the memory.

At my previous employer, I used three well-provisioned ECS servers (4 cores and 16 GB of memory each) for two internet platform products and ran more than 50 microservices plus the supporting base services; I really squeezed the resources dry. On top of that come isolation of service logs, environment variables, global configuration, and so on; the benefits are too many to list. Crucially, this provided good foundational support for the DevOps of our products on an internet architecture: I could run two sets of microservices on one virtual machine, one for production and one for testing. A tested microservice would be promoted to the new production version, while the old one entered a transition and replacement period. See: Build a DevOps application architecture for an internet medical platform.

Common questions

1. Does Docker require internet access?

Internet access is not mandatory: a Docker registry service can be built internally. I once set up a registry on one of several Alibaba Cloud machines and let the other machines reach it through port 5000 on the intranet. Remember to configure the Docker daemon on each server to skip SSL for that registry.
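A minimal sketch of that setup with the official registry image (the intranet address 10.0.0.10 is illustrative):

    # On the registry host: run a private registry on port 5000.
    docker run -d --name registry -p 5000:5000 --restart always registry:2

    # On every client host: trust the plain-HTTP registry, then restart Docker.
    echo '{ "insecure-registries": ["10.0.0.10:5000"] }' | sudo tee /etc/docker/daemon.json
    sudo systemctl restart docker

Note that tee overwrites daemon.json; if the file already has other settings, merge the key in by hand.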

Only for remote publishing do we let our development clients access an HTTPS port on nginx, which reverse-proxies to the registry. For that, install a dockerized nginx on the server, because on the public network HTTPS is the right choice. As shown in the figure below:

[Figure: development clients connect over HTTPS to nginx, which reverse-proxies to the private registry on the intranet]
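A minimal sketch of such a reverse-proxy configuration, written out with a heredoc (the domain, certificate paths, and registry address are illustrative):

    cat <<'EOF' > /etc/nginx/conf.d/registry.conf
    server {
        listen 443 ssl;
        server_name registry.example.com;            # illustrative domain
        ssl_certificate     /etc/nginx/certs/registry.crt;
        ssl_certificate_key /etc/nginx/certs/registry.key;
        client_max_body_size 0;                      # image layers can be huge

        location /v2/ {
            proxy_pass http://10.0.0.10:5000;        # the intranet registry
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-Proto https;
        }
    }
    EOF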

As for installing Docker images on an intranet: first find a machine that can reach the internet and build your own images from Docker Hub. You'd better learn to write a Dockerfile, which is a scripting skill, since localization and parameter tuning usually mean rebuilding the images. Then push the images to the private registry service on the intranet, and every other intranet machine can use them with a simple docker pull.
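The round trip looks roughly like this (the registry address and image name are illustrative):

    # On the internet-connected machine: build, tag for the private registry, push.
    docker build -t myapp:1.0 .
    docker tag myapp:1.0 10.0.0.10:5000/myapp:1.0
    docker push 10.0.0.10:5000/myapp:1.0

    # On any intranet machine:
    docker pull 10.0.0.10:5000/myapp:1.0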

For development on Windows, you can download and install Docker Desktop for Windows, which works the same way as the Mac version.

2. How do you hot-update a program running in Docker?

You build an image with a Dockerfile and run the program in a container created from it. Now the program code has changed: how do you hot-update it?

Generally, Docker is updated by pulling the new image and restarting the container. There are also some clever shortcuts: map the application's publish directory inside the container to a local directory, upload only the package each time, update the server's local directory, and then restart the container. This avoids pushing an oversized image over a slow uplink.
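A minimal sketch of that bind-mount shortcut (the paths and image are illustrative):

    # Serve the app from a host directory instead of baking it into the image.
    docker run -d --name web \
      -v /opt/site/html:/usr/share/nginx/html:ro \
      -p 80:80 nginx:stable

    # To update: replace the files on the host, then bounce the container.
    scp site.tar.gz server:/opt/site/
    ssh server 'cd /opt/site && tar xzf site.tar.gz -C html && docker restart web'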

However, these methods still restart the container, so they are not true hot updates, and online business systems usually have very strict limits on acceptable jitter. Moreover, even an image that passed in the test environment cannot directly replace the one on the production server without verification. For a web system, I suggest API gateway + docker-compose + multi-version operation to achieve hot updates; this also minimizes upgrade jitter. In the figure below, T means test, P means production, and V1 and V2 are the multi-version docker-compose releases. The API gateway redirects traffic to the new microservice version, minimizing upgrade jitter.

[Figure: an API gateway switching traffic between docker-compose releases V1 and V2 across test (T) and production (P)]

Specifically, and briefly: docker-compose describes your application release as a whole, so whether it is a set of microservices or a monolith, it is deployed and managed uniformly as one unit!

Then, for the programs that need updating, release a new docker-compose version alongside the old one. After QA verification, the API gateway switches traffic over dynamically. That is the general idea of Docker hot updates.
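A minimal sketch of running two compose releases side by side (the project names and file are illustrative; the final switch happens in whatever API gateway you use):

    # V1 is already live. Bring V2 up as a separate compose project
    # listening on different host ports.
    docker-compose -p shop_v1 up -d
    docker-compose -p shop_v2 -f docker-compose.v2.yml up -d

    # QA verifies V2 directly, the gateway's upstream is repointed from
    # the V1 ports to the V2 ports, and V1 is retired afterwards.
    docker-compose -p shop_v1 down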

3. Why does the mapped file get out of sync when Docker mounts a data volume?

A reader asks: when learning Docker, I found that redis.conf is mapped as a file. Do you have to copy one out of the container before running the YML?

When Docker maps a configuration file, the file must exist on the host first. Remember, Docker will not create the file for you; if the host path does not exist, Docker creates a directory at that path and mounts that instead. So if the file you meant to map is missing, you end up with a mounted directory.

The first way, then, is to have the configuration file in place beforehand: upload it to a custom configuration directory on the server, and let Docker map it directly. With the path already present, mapping a file (or a directory) behaves as expected.
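A minimal sketch with Redis, following the pattern documented for the official image (the host path is illustrative, and redis.conf must be uploaded first):

    # The host file must exist BEFORE the mount; otherwise Docker creates a
    # directory named redis.conf and the server starts without your settings.
    ls /opt/redis/redis.conf

    docker run -d --name redis \
      -v /opt/redis/redis.conf:/usr/local/etc/redis/redis.conf:ro \
      redis:6 redis-server /usr/local/etc/redis/redis.conf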

The second method: map the configuration directory to a custom host directory and let a script do the writing while the container starts up. During container initialization, the script copies the configuration files packaged in the image out into the mounted configuration directory.
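A minimal entrypoint sketch of that seeding approach (the paths and file names are illustrative):

    #!/bin/sh
    # entrypoint.sh: on first start, seed the mounted config directory
    # from the defaults baked into the image, then start the real process.
    CONF_DIR=/data/conf
    if [ ! -f "$CONF_DIR/redis.conf" ]; then
        cp /defaults/redis.conf "$CONF_DIR/redis.conf"
    fi
    exec redis-server "$CONF_DIR/redis.conf"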

In fact, I have built a production-grade, performance-tuned Redis image using the first way, where you upload the configuration yourself. You can find the source in my Gitee repository, which also holds the Dockerfile and the configuration file with the Redis performance parameters tuned. Search for "read bytes" on Gitee; there is a standalone Redis Dockerfile I wrote, with the performance already optimized, that you can use for learning.

Conclusion

In short, in the current Kubernetes era, we sometimes have to think calmly: do we really need to make it so complicated? Isn't Docker enough? Paired with Portainer, a free web-based Docker management tool, it can manage your cloud server group quite well.
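If you want to try Portainer, deploying it is itself just one container; this is the standard pattern from its documentation (the CE image and port 9000 are the common defaults):

    # Portainer manages the local Docker daemon through its socket.
    docker volume create portainer_data
    docker run -d --name portainer -p 9000:9000 --restart always \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -v portainer_data:/data \
      portainer/portainer-ce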

There is a threshold, though: working with Docker constantly tests your Linux ability, including scripting, because some public images will not fit your actual environment and you have to build your own Dockerfile. In a previous project, I combined Maven with my own Dockerfiles and shell scripts so that a microservice went from packaging to publishing in one go, and I could flexibly choose which microservice to update. That really does require a solid understanding of Linux.
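A minimal sketch of such a packaging-to-publish script (the module layout, registry address, and host name are all illustrative):

    #!/bin/sh
    # deploy.sh <service>: build one microservice module and roll it out.
    set -e
    SVC=$1
    mvn -pl "$SVC" -am clean package                      # build just this module
    docker build -t "10.0.0.10:5000/$SVC:latest" "$SVC"   # Dockerfile lives in the module
    docker push "10.0.0.10:5000/$SVC:latest"
    ssh prod-host "
      docker pull 10.0.0.10:5000/$SVC:latest
      docker rm -f $SVC 2>/dev/null || true
      docker run -d --name $SVC 10.0.0.10:5000/$SVC:latest
    "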

Even all this is not yet enough to say you can use Docker well, and for the far more complex k8s you cannot get around being familiar with Linux either. That remains the key step. In the container era, the way to master a container engine is to start with plain Docker and sharpen your Linux skills alongside your programs; your technique will improve faster!

 
