Some time ago I worked on a Node full-stack project. The server stack was Node + PostgreSQL running on CentOS, and it took a lot of effort to set up the environment, deploy the test server, and then deploy the production server at launch. It was a pile of boring, energy-draining, thankless manual labor, so I started thinking about how to automate this part of the build and deployment, which led me to Docker.
What is Docker
Docker is a virtualization technology that is lighter-weight than a virtual machine; the entity it virtualizes is called a container. A container is itself an isolated sandbox, and it contains only the base libraries and its own services, so it stays very lean. Once running, a container is just a process on the host and consumes very few resources, which makes it practical to run clusters of containers on one operating system, with excellent operability and flexibility.
What is the relationship between an image and a container? You can think of an image as a class (class) and a container as an object (object): a container is produced by instantiating an image, and of course one image can produce multiple containers.
If we are not on a server, how do we use Docker on a client machine? On OS X you can use Docker Desktop together with Kitematic; these two desktop management tools make routine operations very convenient. Kitematic only visualizes some operations, though, so the command line is still essential, because many operations can only be performed there.
Basic Docker operations
About image tags: take nginx:1.19.0-alpine as an example. 1.19.0 is the Nginx version number, and alpine is the codename of the base OS:
- Jessie: Debian 8
- Stretch: Debian 9
- Buster: Debian 10
- Alpine: Alpine Linux, recommended because it is very small
Alpine is the smallest variant, sometimes a quarter of the size of the others. That means image builds are faster and more efficient, since fewer components are loaded, and fewer components also means fewer potential vulnerabilities.
```
docker pull nginx:1.19.0-alpine
```
- --name web: run the container under the name web
- -p 8080:80: Nginx in the container listens on port 80, mapped to local port 8080
- -v xxxx:xxxx : mount the local configuration file over the container's Nginx configuration file
- -d: run in the background
- nginx:1.19.0-alpine : the image to use

```
docker run --name web -p 8080:80 -v /usr/etc/nginx/nginx.conf:/etc/nginx/nginx.conf:ro -d nginx:1.19.0-alpine
```
```
docker images      # list images
docker rmi xxx     # delete an image
docker ps          # list running containers
docker rm xxx      # delete a container
```
The most convenient way to build an image is with a Dockerfile, which is the image's configuration file: as long as you keep the Dockerfile, you can rebuild the image at any time. Here is how to build a very simple one; FROM specifies the base image used during the build:
```
FROM nginx
COPY nginx.conf /etc/nginx/nginx.conf
```
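As a sketch of how this Dockerfile would be used (the tag `my-nginx` is an arbitrary name chosen for this example, and a local `nginx.conf` next to the Dockerfile is assumed):

```shell
# Build an image from the Dockerfile in the current directory;
# "my-nginx" is a hypothetical tag chosen for this example.
docker build -t my-nginx .

# Run the freshly built image, mapping container port 80 to local port 8080.
docker run --name web2 -p 8080:80 -d my-nginx
```

Because the configuration file is baked in by COPY, this image no longer needs the `-v` mount used earlier.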
When a project no longer runs a single container but several that must also communicate with each other, we need a more powerful management tool, such as k8s. For our current small project, though, the official docker-compose is enough.
First of all, the docker-compose.yml. For example, the template below defines two containers: image is the image to use, ports the port mappings, and volumes the data volumes to mount:
```
version: "3"
services:
  webapp:
    image: web
    ports:
      - "8080:80"
    volumes:
      - "/data"
  redis:
    image: "redis:alpine"
```
You can then use the following commands:

```
docker-compose build [options] [SERVICE...]   # build (rebuild) the project's service containers
docker-compose up -d                          # run the compose project in the background
```
docker-compose up is a very powerful command: it tries to automatically complete a whole series of operations, including building images, (re)creating services, starting services, and attaching the services' containers. All linked services are started automatically unless they are already running. Much of the time you can start an entire project with this one command.
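Beyond up, a few companion subcommands (all standard docker-compose commands) are useful day to day:

```shell
docker-compose ps        # list the project's containers and their state
docker-compose logs -f   # follow the aggregated logs of all services
docker-compose stop      # stop services without removing their containers
docker-compose down      # stop and remove the containers and networks created by "up"
```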
Building the Nginx + Node + Postgres project
With the foundation above, we can now build our own project, starting with the Dockerfile. The main steps are:
- create the container's working directory
- copy the relevant configuration files into the container
- install the dependencies inside the container
```
FROM node:14.5.0-alpine3.12

# Working directory
WORKDIR /usr/src/app

# Copy configuration files
COPY package*.json ./
COPY process.yml ./

RUN npm set registry https://registry.npm.taobao.org/ \
    && npm install pm2 -g \
    && npm install

# Manage the process with pm2
CMD ["pm2-runtime", "process.yml", "--only", "app", "--env", "production"]

EXPOSE 3010
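The Dockerfile copies a process.yml that is not shown here. pm2 accepts ecosystem files in YAML, so a minimal sketch might look like the following; the script path `./dist/index.js` is an assumption about this project's build output, and the app name must match the `--only app` filter in the CMD above:

```yaml
# Hypothetical pm2 process file for this project.
apps:
  - name: app                   # must match "--only app" in the Dockerfile CMD
    script: ./dist/index.js     # assumed build entry point
    env_production:
      NODE_ENV: production
```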
- db configures the postgres database; its data volumes (volumes) map the database's data directory and the initialization scripts into the container.
- app configures the node service; build points to the directory containing the Dockerfile; depends_on declares the containers this one depends on and the start order (here db starts first, then the app); links maps db's container name into the app container.
- The nginx container depends_on the app container and also configures the forwarding.
```
version: '3'
services:
  db:
    image: postgres:12.3-alpine
    container_name: postgres
    environment:
      - TZ=Asia/Shanghai
      - POSTGRES_PASSWORD=xxxx
    volumes:
      - ./postgres/data:/var/lib/postgresql/data
      - ./postgres/init:/docker-entrypoint-initdb.d
    ports:
      - 5432:5432
    restart: always   # always restart; recommended in production
    expose:
      - 5432
  app:
    image: koa-pg
    container_name: koa
    volumes:
      - ./dist:/usr/src/app/dist
      - ./logs:/usr/src/app/logs
    build: ./
    environment:
      - TZ=Asia/Shanghai
    restart: always
    depends_on:
      - db
    links:
      - db
    expose:
      - 3010
  nginx:
    image: nginx:1.19.0-alpine
    container_name: nginx
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    ports:
      - 8080:80
    environment:
      - TZ=Asia/Shanghai
    restart: always
    depends_on:
      - app
    links:   # use the container name instead of an IP in the nginx forwarding config
      - app
    expose:
      - 8080
```
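The compose file mounts a local ./nginx.conf that the post never shows. A minimal sketch of the forwarding it describes might look like this; the upstream name `app` and port 3010 come from the compose file above, while everything else is an assumption about this project:

```nginx
# Hypothetical nginx.conf: forward traffic to the Node app,
# addressing it by its compose service name "app".
worker_processes 1;

events { worker_connections 1024; }

http {
  server {
    listen 80;

    location / {
      proxy_pass http://app:3010;             # "app" resolves via the compose link
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
    }
  }
}
```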
After configuring our project, it's time to run it.
This holds on our local development machine and equally when deploying to servers. You can deploy to as many servers as you like: as long as Docker is installed, everything is solved with a single command line.
To start all the containers, just run docker-compose up. So easy!