Quickly master how to use Docker to build a development environment

Time: 2022-1-11

As a platform grows, project development comes to depend on more and more external pieces, especially shared base services. For simplicity, developers often work directly against shared public components. Under parallel development, a database or data change made by one developer frequently costs the others troubleshooting time, dragging down overall efficiency, and it also makes remote assistance much harder. To solve these problems, this article uses Docker Compose to help developers build the development environment themselves: in the end, anyone with Docker installed can bring up the whole environment.

Knowing about a thing is completely different from living with it. Since the day Docker was born, we have dreamed of “deploying a project in 15 seconds” and “version-controlled development environments”, alongside fashionable operations terms such as “rolling deployment” and “software-defined architecture”. People at the front of the wave are busy defining, redefining, and commercializing many of these terms and tools, such as “orchestration” and “service discovery”.

I think the catalyst for this wave is the clean interface and abstraction Docker provides between applications and infrastructure. Developers can reason about infrastructure without knowing the underlying architecture, and operators no longer have to spend ages studying how to install and manage each piece of software. There must be some real power hidden under that seemingly simple surface to make everyone’s life simpler and more efficient.

The real world is cruel, though. Don’t assume that adopting a new technology brings only enjoyment. Over the past few years I have been ground down by projects and strange environments, and Docker is no exception. Still, experience from one project usually carries over directly to the next, and if you want to gain real skill with Docker, you have to immerse yourself in actual projects.

In the past year, I devoted myself to writing my book on Docker fundamentals, Docker in Action.

I noticed that almost everyone who starts learning Docker struggles first with how to create a development environment, and only afterward comes to understand how the pieces of the ecosystem relate. People assume at the outset that Docker will make environments easier to build, which is not entirely wrong. Many “containerization” tutorials cover creating an image and packaging a tool into a container, but Dockerizing a development environment is a completely different thing.

Having been through it early, I can share my experience.

I used to be a heavy Java user, but this experience is not about Java; it is about developing applications with Go and Node. I have some Go development experience and am actively improving in that area. The main problem when entering an unfamiliar field and trying to get started quickly is finding the right workflow, and I also hate endlessly installing software on my laptop. That is what drives me to do this work with Docker, or sometimes with Vagrant.

The project I worked on was a standard REST service written in Go, built on Gin and depending on libraries and services for Redis and NSQ. In other words, I needed client libraries plus locally running Redis and NSQ instances. More interesting still, I also used Nginx to serve some static resources.

To outsiders, Go is a programming language. In fact, there is also a command-line tool called “go”, used for everything from dependency management and compilation to running test cases and other tasks. For a Go project, apart from Git and a decent editor, that tool handles the rest. But there was still a problem: I didn’t want to install Go on my laptop. I wanted only Git and Docker on it. Keeping the local requirements that small avoids compatibility problems across environments and lowers the barrier for newcomers.

This project has runtime dependencies, which means the toolset needs Docker Compose for simple environment definition and orchestration. Many people feel uneasy at this point: where do you start, a Dockerfile or a docker-compose.yml? Well, let me tell you what I did, and then explain why.

In this case, I wanted my local build to be fully automated. I don’t like stepping through things by hand, and my Vim configuration is deliberately simple; I just want to control the environment at the “running or not” level. The goal of a localized development environment is fast reproducibility, both to improve productivity and to make it shareable as Docker images. I eventually finished a Dockerfile that produces an image containing Go, Node, and Gulp, the build tool I use most often. This Dockerfile embeds no code, and the image embeds no gulpfile; instead, a volume is defined at an established GOPATH (the root of the Go workspace).

Finally, I set the image’s entrypoint to gulp and the default command to watch. The resulting image is definitely not what I’d call a build artifact. In this sense, the only thing this environment does is provide a running instance that tells us whether the code works. That suited my scenario very well. I reserve the word “artifact” for the output of a separate build.
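
A minimal sketch of such a Dockerfile, with illustrative install steps and placeholder paths (the original file is not shown here, so treat every detail as an assumption):

# Sketch of a dev-environment image: toolchain only, no project code or gulpfile baked in.
FROM golang
# Node and gulp drive the build pipeline (these install steps are illustrative).
RUN apt-get update && apt-get install -y nodejs npm && npm install -g gulp
# The Go workspace root (GOPATH in the golang image) is a volume; source is bind-mounted at run time.
VOLUME /go
WORKDIR /go/src/app
# Entrypoint is gulp; the default command watches for changes.
ENTRYPOINT ["gulp"]
CMD ["watch"]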

Next, I used Compose to define the local development environment. First, all dependent services are defined from images on Docker Hub and linked to a “target” service. That service points at where the new Dockerfile is built, binds the local source directory to the mount point the new image expects, and exposes some ports for testing. Then another service is added that continuously runs a suite of integration tests against the target in a loop. Finally, I added an nginx service and mounted volumes holding its configuration files and static assets. The benefit of volumes is that config and assets can be reused without rebuilding images.
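
A docker-compose.yml along those lines might look like this sketch (service names, images, ports, and paths are all placeholders):

target:
  build: .
  volumes:
    - .:/go/src/app            # bind the local source into the image's GOPATH
  ports:
    - "8080:8080"              # exposed for manual testing
  links:
    - redis
    - nsqd
redis:
  image: redis                 # dependency pulled from Docker Hub
nsqd:
  image: nsqio/nsq
  command: /nsqd               # a standalone NSQ daemon
integration:
  build: ./integration         # loops a suite of integration tests against target
  links:
    - target
nginx:
  image: nginx
  volumes:
    - ./conf:/etc/nginx/conf.d        # reuse config without rebuilding
    - ./static:/usr/share/nginx/html  # static assets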

All of this yields a local development environment on your machine. Bring it up with:


docker-compose up -d

From a fresh git clone, that one command starts everything and keeps it running; there is no need to rebuild the image or restart containers afterward. Whenever a Go file changes, gulp rebuilds and restarts my service inside the running container. It’s that simple.

Was this environment easy to create? Not exactly, but it works. Wouldn’t installing Go, Node, and gulp locally, without containers, be easier? Perhaps in this scenario, but then only the dependent services would run under Docker, and I don’t like splitting things that way.

I used to manage different versions of these tools by hand, which bred convoluted environment variables and scattered build artifacts everywhere. I had to keep reminding colleagues about those conflict-prone variables, and none of it was under centralized version control.

Maybe you don’t like the environment described above, or your project has different needs. Fair enough. This article is not saying every tool must run in Docker; if it were, it would mean we hadn’t thought about which problems we’re actually solving.

When I designed this environment, I weighed the following questions and concerns, along with some possible answers. When you build your own Docker work environment, you may find reality messier than your answers.

When you think about packaging and environment, what are the first factors to consider?

This is indeed the most important question. In this scenario there are several options. I could use the go tool directly inside a container, which looks something like the following:
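
Something along these lines, assuming the official golang image and placeholder paths:

# Run the go tool from a throwaway container instead of a local install.
docker run --rm -it \
  -v "$PWD":/go/src/app \
  -w /go/src/app \
  golang go build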

In fact, most of the boilerplate in this example can be hidden behind shell aliases or functions, so it feels as if go were installed on the machine itself, and you keep the usual go workflow for creating artifacts. These properties are handy for non-service projects, such as libraries and standalone programs.
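
For instance, a shell alias like this sketch (paths are placeholders) makes the container disappear from view:

# After defining this, "go build" and "go test" behave as if the toolchain were installed locally.
alias go='docker run --rm -it -v "$PWD":/go/src/app -w /go/src/app golang go'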

If you already use gulp, make, ant, or other build scripts, you can keep them and make Docker the target of those tools.

Alternatively, I can define and control my build with docker build for a more Docker-centric experience, along these lines:
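
In its simplest form this is just a build against the project’s Dockerfile; the image name here is a placeholder:

# Build an image from the Dockerfile in the current directory.
docker build -t example/my-service .
# Run the freshly built image, publishing the service port.
docker run --rm -p 8080:8080 example/my-service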

Having Docker control the build has several benefits. You can build on previously compiled images, and Dockerfile builds use layer caching, so compilation repeats only the minimum necessary steps (assuming a well-structured Dockerfile). Finally, the images these builds produce can be shared with other developers.

In this case, I used the onbuild image from the golang repository as the base. It includes some great logic for downloading dependency packages. This approach yields Docker images that are easy to use in non-production environments. The problem with it for production-grade images is that you must take extra steps to avoid oversized images and to include initialization scripts that verify state before starting and monitoring the service.
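
With the onbuild variant, the project’s Dockerfile shrinks to almost nothing, as in this sketch (the exposed port is an assumption; note that the onbuild image variants have since been deprecated):

# The ONBUILD triggers in golang:onbuild copy in the source and build it automatically.
FROM golang:onbuild
# Port is an assumption for this sketch.
EXPOSE 8080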

Interestingly, Docker itself is built with a collection of scripts, Makefiles, and Dockerfiles. The build system is fairly robust and handles all kinds of testing, linting, and so on, as well as producing artifacts for various operating systems and architectures. In that setup, containers are a tool for producing binaries, created from a locally built image.

Going a step beyond docker build, you can use Compose to define a complete development environment.

Compose takes over environment management. If the result feels clean, that’s no surprise: Compose wires everything together, streamlines volume management, builds images automatically when they’re missing, and aggregates log output. I chose these features because they simplify service dependencies and produce the artifacts I need.

My example produces a runtime container, and Compose and plain docker both have the right tools for that. In your scenario, you may instead want a distributable image, or a build that produces a binary for the host machine.

If the output you want is an image, the source code or precompiled libraries must be embedded in the image at build time. Volumes are not mounted during a build, which means the image has to be rebuilt on every iteration.

If you want to produce artifacts inside a container, you need to mount volumes, which is easy from the docker command line or a Compose environment. Note, however, that volumes only work with a running container, which means docker build alone cannot do this.
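
A sketch of that approach, assuming the official golang image and placeholder paths: the build runs in a throwaway container with the source directory bind-mounted, so the compiled binary lands on the host.

# Compile inside a container; the binary is written to the bind-mounted host directory.
docker run --rm \
  -v "$PWD":/go/src/app \
  -w /go/src/app \
  golang go build -o my-service .
# Note: on Linux the output file will be owned by root unless you pass --user.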

Summary

Currently, there is no single “Docker way” to create a development environment. Docker is a tool to be composed into your own workflow, not holy writ. Rather than adopting someone else’s ready-made Docker build system, spend some time learning the tool, clarify your own needs, and then create a Docker environment of your own.
