Using k8s to build a front-end test environment – creating CI/CD

Time: 2020-08-01

Each CI/CD job runs in its own independent container, so jobs do not interfere with one another. We can create the following stages:

  • Project build stage: install dependencies and build the project
  • Image build stage: package the built project into a Docker image
  • k8s deployment stage: create the pod and service on the k8s cluster
  • Server deployment stage: deploy the final environment by updating the host's forwarding

As mentioned above, CI/CD jobs run in independent containers. We cannot copy the built files to the host directly, so we package them into an image during the image build stage. Likewise, a CI/CD container cannot directly create pods and services on the host's k8s cluster, so we need an additional image whose containers are able to create pods and services on the host's cluster.

Create the host executor image

First of all, kubectl operates a k8s cluster according to its configuration and certificates. When we start a cluster through minikube, these are set up automatically, and by default kubectl operates the cluster on the current server. If we obtain the certificates and connection information of a remote k8s cluster, we can operate that remote cluster locally as well.
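
For reference, a trimmed-down sketch of what such a kubeconfig looks like (the server address here is illustrative; note the certificate paths must match where the image built below places the files):

apiVersion: v1
kind: Config
clusters:
  - name: minikube
    cluster:
      # API server address of the host cluster; illustrative
      server: https://192.168.99.100:8443
      certificate-authority: /root/.minikube/ca.crt
contexts:
  - name: minikube
    context:
      cluster: minikube
      user: minikube
current-context: minikube
users:
  - name: minikube
    user:
      client-certificate: /root/.minikube/client.crt
      client-key: /root/.minikube/client.key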

The image we create here contains that cluster information, so containers started from it can create pods and services on the host's cluster during CI/CD.

Let’s write the Dockerfile:

FROM alpine:latest

# Directory for the cluster certificates
RUN mkdir -p /root/.minikube

# kubectl binary, kubeconfig, and certificates, gathered in the build context beforehand
COPY kubectl /usr/bin/
COPY kube/config /root/.kube/
COPY minikube/* /root/.minikube/

We copy the kubectl binary into the image so it is ready to use. We also copy ~/.kube/config, along with the ca.crt, client.crt, and client.key files under ~/.minikube/, into the image.

Finally, run docker build -t ${imagename} . to build the image.
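
Note that COPY can only read from the build context, so the kubectl binary and the certificate files have to be gathered next to the Dockerfile first. A sketch of the full flow, using gaia_builder, the name that appears in the CI config later:

$ mkdir -p build/kube build/minikube && cd build
$ cp $(which kubectl) .
$ cp ~/.kube/config kube/
$ cp ~/.minikube/ca.crt ~/.minikube/client.crt ~/.minikube/client.key minikube/
# place the Dockerfile from above here, then build and smoke-test
$ docker build -t gaia_builder:0.0.1 .
# should list the host cluster's pods if the API server is reachable from the container
$ docker run --rm gaia_builder:0.0.1 kubectl get pods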

If you don't want to publish the built Docker image to the public internet, you can run your own private registry. Here we recommend Harbor; there are few pitfalls in its installation, and by following the official documentation you can get Harbor installed and running smoothly.

After the image is built, we need to push it to the private registry. Before pushing, we need to tag the image.

To push to Harbor, the tag should follow the pattern server/project/imagename. For example, if our Harbor address is harbor.mydocker.com and the project group is frontend, the tag should read: harbor.mydocker.com/frontend/imagename
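
For example, assuming the image was built locally under the name imagename:

$ docker tag imagename harbor.mydocker.com/frontend/imagename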

Then we log in to the registry and push:

$ docker login harbor.mydocker.com

> Enter the username and password
> Login succeeded

$ docker push harbor.mydocker.com/frontend/imagename

Finally, after the push succeeds, we should be able to see the image in Harbor.

Configure the CI/CD script

We won't describe GitLab CI configuration in detail here; the official documentation covers it very well.

Let’s write a demo configuration for the four stages we described at the beginning:

# This is our host executor image built above
image: harbor.com/frontend/gaia_builder:0.0.1

# Corresponding to the four stages mentioned at the beginning
stages:
  - build_project
  - build_docker
  - deploy_k8s
  - deploy_server

build_project:
  # The environment and tools used to build the project,
  # e.g. the Node.js version, yarn, and other tools are provided by this image
  # How to build this image is covered later
  image: harbor.com/frontend/frontend_toolkit:0.0.1
  # Execution is set to manual; we don't want to build an environment automatically on every commit
  when: manual
  stage: build_project
  # Pass the built assets to the next stage as artifacts
  artifacts:
    paths:
      - dist
  # The tag of the runner we want to use
  tags:
    - fek8s
  script:
    # The actual build script; adjust it to your project
    # At the end we delete node_modules to reduce the size of the service image
    - yarn install --frozen-lockfile
    - yarn build
    - rm -rf node_modules/

build_docker:
  image: docker:latest
  when: manual
  # The runner we configured earlier runs jobs inside Docker, so building a Docker image here means running Docker inside Docker
  # The following configuration can be treated as standard for that setup (see the runner note after this config)
  variables:
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: ''
    DOCKER_HOST: tcp://127.0.0.1:2375
    # Export some general environment variables for later reuse
    DOCKER_IMAGE_NAME: $GITLAB_USER_LOGIN-$CI_BUILD_REF_NAME
    DOCKER_IMAGE_TAG: harbor.com/myproject/$GITLAB_USER_LOGIN-$CI_BUILD_REF_NAME:latest
  services:
    # dind is short for Docker-in-Docker
    # The --insecure-registry flag tells the daemon to treat our private registry as trusted
    - name: docker:dind
      command: ["--insecure-registry=harbor.com"]
  stage: build_docker
  dependencies:
    - build_project
  tags:
    - fek8s
  script:
    # As covered later, this image is the base image of the service (a sketch appears at the end of this post)
    - export CACHE_IMAGE=harbor.com/frontend/gaia_nginx:latest
    # Pull the cache image; `|| true` keeps the job from failing when the pull fails
    - docker pull $CACHE_IMAGE || true
    # Build the service image, reusing the pulled image as build cache
    - docker build --cache-from $CACHE_IMAGE -t $DOCKER_IMAGE_TAG -f gaia/Dockerfile .
    # Log in to the private registry
    # Since different users have different accounts and passwords, these are variables
    # They can be protected by adding them as CI/CD variables in the GitLab project settings
    - docker login -u $DOCKER_USERNAME -p $DOCKER_PWD harbor.com
    # Finally, push the service image to the registry
    - docker push $DOCKER_IMAGE_TAG

deploy_k8s:
  when: manual
  stage: deploy_k8s
  tags:
    - fek8s
  script:
    # Service names cannot contain underscores, but our branch names often do, so replace them with hyphens to prevent errors
    - export K8S_SVC_BRANCH=$(echo $CI_BUILD_REF_NAME | sed 's/_/-/g')
    # To keep environment setup cheap, service names use the fixed format `${username}-${branch}-svc`
    - export SVC_NAME=$GITLAB_USER_LOGIN-$K8S_SVC_BRANCH-svc
    # Delete the old service
    - kubectl delete svc $SVC_NAME --ignore-not-found --namespace=default
    # Delete the old deployment
    - kubectl delete deploy $SVC_NAME --ignore-not-found --namespace=default
    # Create a new deployment (note: on newer kubectl versions, `kubectl run` only creates a pod; use `kubectl create deployment` there instead)
    - kubectl run $SVC_NAME --image=harbor.com/myproject/$GITLAB_USER_LOGIN-$CI_BUILD_REF_NAME:latest --namespace=default
    # Expose the deployment on port 80, which makes it easy for the k8s server to forward traffic to it
    - kubectl expose deploy $SVC_NAME --port=80 --namespace=default

deploy_server:
  image: curlimages/curl:latest
  stage: deploy_server
  tags:
    - fek8s
  needs: ["deploy_k8s"]
  script:
    # Call the service on the host (the k8s server) to reload the nginx forwarding config
    # This service is explained later
    - curl -X POST 10.10.10.10:3500/svc
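
One note on the build_docker job above: the docker:dind service can only start if the GitLab runner is allowed to run privileged containers. Assuming a Docker-executor runner like the one registered earlier in this series, the relevant part of its config.toml would look roughly like this (a sketch; the name is illustrative):

[[runners]]
  name = "fek8s-runner"   # registered with the fek8s tag used in the jobs above
  executor = "docker"
  [runners.docker]
    image = "docker:latest"
    # dind needs a privileged container to start its own Docker daemon
    privileged = true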

Now that our CI script is configured, what remains is to build the base images and services this article has referenced.
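
As a preview, the gaia/Dockerfile referenced in build_docker can be very small: it only needs to copy the dist artifacts into the service base image. A minimal sketch, assuming gaia_nginx is an nginx-based image that serves /usr/share/nginx/html on port 80:

# gaia/Dockerfile (sketch)
FROM harbor.com/frontend/gaia_nginx:latest

# dist was produced by build_project and handed over as an artifact
COPY dist/ /usr/share/nginx/html/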

Using k8s to build a front-end test environment
Using k8s to build a front-end test environment – building the k8s environment
Using k8s to build a front-end test environment – GitLab integration with k8s
Using k8s to build a front-end test environment – creating CI/CD
Using k8s to build a front-end test environment – basic service construction
Using k8s to build a front-end test environment – summary