It’s time to say goodbye to Docker

Time: 2022-5-6

In the ancient times of containers (about four years ago), Docker was the only player in the container game. That is no longer the case: Docker is not the only container engine anymore, just one of several. Docker lets us build, run, pull, push and inspect container images, but for each of these tasks there are other tools that do the job better. So let’s look at the current landscape and (just maybe) uninstall and forget about Docker altogether.

So, why not use Docker?

If you have been a Docker user for a long time, you probably need some convincing before you will even consider switching to different tools. So, here goes:

First of all, Docker is a monolithic tool that tries to do everything, which is generally not the best approach. In most cases it is better to pick a specialized tool that does just one thing, and does it really well.

If you are worried about having to learn different CLIs, different APIs or different concepts after switching tools, that will not be a problem. Picking any of the tools in this article can be completely seamless, because they all (Docker included) follow the same OCI (Open Container Initiative) specifications. The OCI covers specifications for the container runtime, container distribution and container images, which together cover all the features needed for working with containers.

Thanks to the OCI, you can pick the set of tools that best suits your needs while still using the same APIs and CLI commands you know from Docker.

So, if you are open to trying new tools, let’s compare the advantages, disadvantages and features of Docker and its competitors and see whether it is worth considering ditching Docker for some shiny new tool.

Container engine

When comparing Docker with other tools, we need to break it down into its components, and the first one to discuss is the container engine. A container engine is the tool that provides the user interface for working with images and containers, so that you do not have to deal with things like seccomp rules or SELinux policies yourself. Its job also includes pulling images from remote registries and unpacking them onto disk. It might look like it runs containers too, but in reality its job is to create a container manifest and a directory with the image layers, which it then hands over to a container runtime such as runc or crun (discussed later).

There are many container engines available, but the most prominent competitor to Docker is Podman, developed by Red Hat. Unlike Docker, Podman does not need a daemon to run and does not need root privileges, both of which have been long-standing concerns with Docker. Podman can run not only containers but also pods. If you are not familiar with the concept, a pod is the smallest deployable computing unit in Kubernetes; it consists of one or more containers working together. That also makes it easier for Podman users to later migrate their workloads to Kubernetes. As a simple demonstration, this is how you run two containers in a single pod:

~ $ podman pod create --name mypod
~ $ podman pod list

POD ID         NAME    STATUS    CREATED         # OF CONTAINERS   INFRA ID
211eaecd307b   mypod   Running   2 minutes ago   1                 a901868616a5

~ $ podman run -d --pod mypod nginx  # First container
~ $ podman run -d --pod mypod nginx  # Second container
~ $ podman ps -a --pod

CONTAINER ID  IMAGE                           COMMAND               CREATED        STATUS            PORTS  NAMES               POD           POD NAME
3b27d9eaa35c  docker.io/library/nginx:latest  nginx -g daemon o...  2 seconds ago  Up 1 second ago          brave_ritchie       211eaecd307b  mypod
d638ac011412  docker.io/library/nginx:latest  nginx -g daemon o...  5 minutes ago  Up 5 minutes ago         cool_albattani      211eaecd307b  mypod
a901868616a5  k8s.gcr.io/pause:3.2                                  6 minutes ago  Up 5 minutes ago         211eaecd307b-infra  211eaecd307b  mypod

Finally, Podman provides exactly the same CLI commands as Docker, so you can simply alias docker to podman and carry on as before.
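
If you want to try that without retraining your muscle memory, a minimal sketch (assuming Podman is already installed) could look like this:

~ $ alias docker=podman                 # point the familiar command at Podman
~ $ docker run -d -p 8080:80 nginx      # runs through Podman, no daemon and no root needed
~ $ docker ps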

Besides Docker and Podman there are other container engines out there, but I consider them either dead ends or unsuitable for local development. Still, to complete the picture, let’s at least mention them:

  1. LXD – LXD is the container manager (daemon) for LXC (Linux Containers). It offers the ability to run system containers, which provide a container environment closer to a VM. It sits in a very narrow niche and does not have many users, so unless you have a very specific use case, you are better off with Docker or Podman.
  2. CRI-O – when you google what CRI-O is, you may find it described as a container engine, but it is really a container runtime. It is built specifically to serve as a runtime for Kubernetes, not as a tool for end users.
  3. rkt – rkt is a container engine developed by CoreOS. It is mentioned here only for completeness, because the project has ended and development has stopped, so it should not be used.

Building images

When it comes to container engines, Docker really has only one alternative. For building images, however, we have many more options to choose from.

First, let me introduce Buildah. It is another tool developed by Red Hat and it pairs very well with Podman. If you have already installed Podman, you may have noticed that the podman build command is really Buildah in disguise, as its binary is bundled inside Podman.

Feature-wise, Buildah follows the same route as Podman: it is daemonless and rootless, and it produces OCI images, so your images are guaranteed to run the same way as ones built with Docker. It can also build images from a Dockerfile or a Containerfile, which are the same thing under different names. Beyond that, Buildah gives you finer control over image layers, allowing you to commit many changes into a single layer. One difference from Docker is that images built by Buildah are user-specific, so you will only see the images you built yourself.

Now, considering that Buildah is already included in the Podman CLI, you might ask why you would bother with the separate buildah CLI at all. The buildah CLI is a superset of the commands included in podman, so you might never need to touch it, but by using it directly you may also discover some extra features (for more details on the differences between Podman and Buildah, see the article referenced below).

With that said, let’s look at a quick demonstration:

~ $ buildah bud -f Dockerfile .  # Build from a Dockerfile ("bud" = build-using-dockerfile)

~ $ buildah from alpine:latest  # Create starting container - equivalent to "FROM alpine:latest"
Getting image source signatures
Copying blob df20fa9351a1 done
Copying config a24bb40132 done
Writing manifest to image destination
Storing signatures
alpine-working-container  # Name of the temporary container
~ $ buildah run alpine-working-container -- apk add --update --no-cache python3  # equivalent to "RUN apk add --update --no-cache python3"
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/community/x86_64/APKINDEX.tar.gz
...

~ $ buildah commit alpine-working-container my-final-image  # Create final image
Getting image source signatures
Copying blob 50644c29ef5a skipped: already exists  
Copying blob 362b9ae56246 done
Copying config 1ff90ec2e2 done
Writing manifest to image destination
Storing signatures
1ff90ec2e26e7c0a6b45b2c62901956d0eda138fa6093d8cbb29a88f6b95124c

~ $ buildah images
REPOSITORY               TAG     IMAGE ID      CREATED         SIZE
localhost/my-final-image latest  1ff90ec2e26e  22 seconds ago  51.4 MB

From the script above you can see that building an image with buildah bud (bud stands for build-using-dockerfile) is very simple, but you can also script the build with Buildah’s from, run, copy and commit commands, which are equivalent to the corresponding instructions in a Dockerfile.

The next tool to introduce is Google’s Kaniko. Kaniko also builds images from a Dockerfile and, like Buildah, does not need a daemon. The main difference between Kaniko and Buildah is that Kaniko focuses on building images inside Kubernetes.

Kaniko is meant to be run as an image, using gcr.io/kaniko-project/executor, which makes sense inside a Kubernetes cluster but is not very convenient for local builds and somewhat defeats the purpose, because you would need Docker to run the Kaniko image that builds your image. That said, if you are looking for a tool to build images inside a Kubernetes cluster, Kaniko may be a good choice thanks to being daemonless and arguably more secure.
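
To give you an idea of what that looks like in practice, here is a minimal sketch of running the Kaniko executor locally through Docker; the destination registry and tag are made-up placeholders, and it assumes your registry credentials live in the usual ~/.docker/config.json:

~ $ docker run --rm \
      -v $(pwd):/workspace \
      -v $HOME/.docker/config.json:/kaniko/.docker/config.json:ro \
      gcr.io/kaniko-project/executor:latest \
      --dockerfile=/workspace/Dockerfile \
      --context=dir:///workspace \
      --destination=registry.example.com/my-app:latest   # hypothetical target image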

From my personal experience, both Kaniko and Buildah do a good job of building images inside a Kubernetes/OpenShift cluster, but with Kaniko I ran into occasional random build crashes and failures when pushing images to the registry.

The third competitor is BuildKit, sometimes described as the next-generation docker build. It is part of the Moby project and can be enabled in Docker by setting DOCKER_BUILDKIT=1. So, what does it bring you? It introduces a bunch of improvements and cool features, including parallel build steps, skipping unused stages, better incremental builds and rootless builds. On the other hand, it still requires a daemon to run (buildkitd). So, if you do not want to get rid of Docker but do want some new features and nice improvements, BuildKit might be the way to go.
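
As a quick sketch of what enabling it looks like (assuming a reasonably recent Docker installation; the image tag is just a placeholder):

~ $ DOCKER_BUILDKIT=1 docker build -t my-app:latest .   # same docker build command, executed by BuildKit
~ $ export DOCKER_BUILDKIT=1                            # or turn it on for the whole shell session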

There are also a few tools for more specific use cases that are worth mentioning, even though they would not be my first choice:

  1. Source-To-Image (S2I) is a toolkit for building images directly from source code without a Dockerfile. It works well for simple, expected scenarios and workflows, but it quickly becomes annoying and clumsy if you need a lot of customization or if your project does not have the expected layout. You might consider S2I if you are not yet confident with Dockerfiles, or if you build your images on an OpenShift cluster, where S2I builds are a built-in feature.
  2. Jib is another tool from Google, this one for building images of Java applications. It ships as Maven and Gradle plugins that make it easy to build images without having to deal with a Dockerfile (see the small sketch after this list).
  3. Last but not least there is Bazel, yet another Google tool. It is not just for building container images; it is a complete build system, so studying Bazel just to build an image might be overkill, but it is definitely a good learning experience, and the rules_docker section is a good starting point if you feel like it.
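
As a rough sketch of the Jib workflow mentioned above, assuming the jib-maven-plugin is already configured in your pom.xml and with a made-up image name, building and pushing happens straight from Maven, with no Dockerfile and no Docker daemon involved:

~ $ mvn compile jib:build -Dimage=registry.example.com/my-java-app:latest   # build and push directly from Maven
~ $ mvn compile jib:dockerBuild                                             # or build into a local Docker daemon instead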

Container runtime

The last piece of the puzzle is the container runtime, the component responsible for actually running containers. It is one part of the container lifecycle that you will probably never touch unless you have very specific requirements around speed or security, so if you are getting tired by now, you can skip this section. Otherwise, if you just want to know what the options are, here they are:

runc is the most popular container runtime, built around the OCI container runtime specification. It is used by Docker (through containerd), Podman and CRI-O, so pretty much everything except LXD uses it. There is not much more to add: it is the default just about everywhere, so even if you abandon Docker after reading this article, you will most likely still be using runc.
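
To make the engine/runtime split from earlier a little more concrete, here is a minimal sketch of what an engine essentially prepares for runc: an OCI bundle, i.e. a root filesystem plus a config.json (the paths are arbitrary, and the rootfs is populated from an Alpine image via Podman purely for illustration):

~ $ mkdir -p /tmp/mybundle/rootfs
~ $ podman export $(podman create alpine) | tar -C /tmp/mybundle/rootfs -xf -   # unpack an image into the bundle's rootfs
~ $ runc spec -b /tmp/mybundle                                                  # generate a default config.json for the bundle
~ $ sudo runc run -b /tmp/mybundle demo                                         # hand the bundle directly to the runtime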

One alternative to runc is crun, a runtime developed by Red Hat and written entirely in C (runc is written in Go), which makes it faster and more memory-efficient than runc. Since it is also an OCI-compliant runtime, you should be able to switch to it easily if you want to give it a try. Although it is not very popular yet, it will ship as an alternative OCI runtime in RHEL 8.3 (currently in preview), and given that it is a Red Hat product, we may eventually see it become the default for Podman or CRI-O.
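
Switching runtimes for a single container is one flag away; a minimal sketch, assuming crun is installed at /usr/bin/crun:

~ $ podman --runtime /usr/bin/crun run --rm alpine echo "hello from crun"   # run this one container with crun instead of runc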

Speaking of CRI-O: as I said earlier, CRI-O is not really a container engine but a container runtime, because it lacks features such as pushing images, which is exactly what you would expect from an engine. Internally, CRI-O uses runc to run containers. It is not something you should try to use on your desktop machine, as it is built to be used as the runtime on Kubernetes nodes and is described as “all the runtime Kubernetes needs and nothing more”. So, unless you are setting up a Kubernetes cluster (or an OpenShift cluster), you probably should not touch it.

The last mention in this section is containerd, a CNCF graduated project. It is a daemon that acts as an API facade over various container runtimes and operating systems. In the background it relies on runc, and it is the default runtime of the Docker engine. It is also used by Google Kubernetes Engine (GKE) and IBM Kubernetes Service (IKS). It is an implementation of the Kubernetes Container Runtime Interface (like CRI-O), which makes it a good candidate for a Kubernetes cluster runtime.
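
If you are curious what talking to containerd directly looks like, here is a minimal sketch using its bundled ctr debugging client (assuming containerd is installed and its daemon is running):

~ $ sudo ctr images pull docker.io/library/nginx:latest   # containerd expects fully qualified image references
~ $ sudo ctr run -d docker.io/library/nginx:latest web    # start a container with the ID "web"
~ $ sudo ctr containers list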

Image inspection and distribution

The final part of the container stack is image inspection and distribution. This effectively replaces docker inspect and adds the ability to copy/mirror images between remote registries.

The only tool I will mention here for these tasks is Skopeo. It is made by Red Hat and is a companion tool to Buildah, Podman and CRI-O. Apart from the basic skopeo inspect, which we all know from docker inspect, Skopeo can also copy images with skopeo copy, which lets you mirror images between remote registries without first pulling them to local storage. This feature can also act as a pull/push if you work with a local registry.
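
As a small sketch of both features (the destination registry is a made-up placeholder):

~ $ skopeo inspect docker://docker.io/library/nginx:latest      # inspect a remote image without pulling it
~ $ skopeo copy docker://docker.io/library/nginx:latest \
      docker://registry.example.com/mirror/nginx:latest         # mirror it straight into another registry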

On top of that, I would also like to mention dive, a tool for inspecting, exploring and analyzing images. It is more user-friendly, provides more readable output and can dig deeper into your image, analyzing and measuring its efficiency. It is also suitable for use in CI pipelines, where it can measure whether your image is “efficient enough”, or in other words whether it wastes too much space.
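
Exploring an image with dive is a one-liner (assuming dive is installed; any locally available image name works here):

~ $ dive nginx:latest          # open an interactive, layer-by-layer view of the image
~ $ CI=true dive nginx:latest  # non-interactive CI mode that passes or fails based on efficiency thresholds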

Conclusion

The intention of this article is not to persuade you to abandon Docker completely, but to show you the whole landscape and all the options for building, running, managing and distributing containers and their images. Every tool, Docker included, has its advantages and disadvantages, and what matters most is evaluating which set of tools best fits your workflow and use cases. I hope this article helps you with that.
