CODING DevOps + Nginx-ingress for Automatic Canary Releases

Time: 2020-11-22

Author: Wang Wei, a back-end development engineer at CODING DevOps with many years of R&D experience. He is an enthusiast of cloud native, DevOps, and Kubernetes, a member of the ServiceMesher service mesh community, and holds the Kubernetes CKA and CKAD certifications.

Preface

The simplest way to implement canary releases for a Kubernetes application is to use the official Nginx-ingress.

We deploy two sets of Deployments and Services to represent the canary environment and the production environment. A load-balancing algorithm then splits traffic between the two environments according to the canary ratio, which gives us a canary release.

The common practice is: after the project is packaged, modify the image version in the yaml file to the new image and run kubectl apply to update the service. If the release also needs a canary phase, the canary is then controlled by adjusting the weights in the two environments' configuration files, which cannot be done without manual work. When there are many projects and the canary phase spans a long time, the probability of human error rises sharply; this much reliance on manual execution is intolerable in DevOps engineering practice.
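For concreteness, the manual flow amounts to hand-editing fields like the following in each environment's manifests (an illustrative fragment only; the container name and image address are placeholders, not taken from the project):

```yaml
# deployment/canary/deploy.yaml (fragment) -- image tag bumped by hand each release
spec:
  template:
    spec:
      containers:
      - name: nginx                                 # illustrative container name
        image: registry.example.com/demo/nginx:v2   # placeholder image, edited manually
---
# canary Ingress (fragment) -- weight raised by hand at every canary step
metadata:
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"  # 10 -> 30 -> ... -> 100, all manual
```

Every one of these edits and the subsequent kubectl apply is a chance for human error, which is exactly what the pipeline below automates away.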

So, is there a way to run a canary release automatically, without manual intervention? For example: after the code is updated, it is automatically published to the pre-release and canary environments; the canary ratio automatically climbs from 10% to 100% over the course of a day, and can be aborted at any time; and once the canary passes, the release automatically proceeds to production?

The answer is yes: CODING DevOps can meet exactly these needs.

Architecture and principles of Nginx-ingress

Let's take a quick look back at the architecture and implementation principles of Nginx-ingress.

(Figure: Nginx-ingress architecture)

Nginx-ingress receives cluster traffic through a front-facing LoadBalancer-type Service and forwards it to the Nginx-ingress Pod, where the configured policies are checked; the traffic is then forwarded to the target Service, and finally to the business containers.

With conventional Nginx, we would have to write the policies into conf files ourselves. Nginx-ingress instead provides the Nginx-ingress-Controller, which translates between native conf configuration and yaml configuration: after we apply a policy in a yaml file, the controller transforms it, updates the policy dynamically, and dynamically reloads the Nginx Pod, achieving automatic management.

How, then, does the Nginx-ingress-Controller dynamically perceive policy changes in the cluster? There are several methods: through a webhook admission interceptor, or by interacting with the Kubernetes API dynamically through a ServiceAccount. Nginx-ingress-Controller uses the latter. That is why, when deploying Nginx-ingress, we find that the Deployment specifies a ServiceAccount for the Pod and applies RoleBindings, so that in the end the Pod can talk to the Kubernetes API.
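A minimal sketch of that wiring (the names here are illustrative; the project's real manifests live under the nginx-ingress-init directory):

```yaml
# ServiceAccount the controller Pod runs as
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress            # illustrative name
  namespace: kube-system
---
# Bind a ClusterRole that can get/list/watch Ingresses, Services, Endpoints, etc.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress            # assumed to grant watch access to the resources above
subjects:
- kind: ServiceAccount
  name: nginx-ingress
  namespace: kube-system
# The controller Deployment then sets
#   spec.template.spec.serviceAccountName: nginx-ingress
# so the Pod can watch the API for policy changes.
```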

Preview of implementation scheme

In order to achieve the above goal, we designed the following continuous deployment pipeline.

(Figure: continuous deployment pipeline example)

The continuous deployment pipeline mainly implements the following steps:

1. Automatically deploy to the pre-release environment
2. Decide whether to run an A/B test
3. Automatic canary release (3 automatic rounds, gradually raising the canary ratio)
4. Publish to production

This case also demonstrates the full flow from a git push to an automatically triggered continuous integration run:

1. After the code is pushed, continuous integration is triggered and the image is built automatically
2. After the image build completes, the image is automatically pushed to the artifact repository
3. Continuous deployment is triggered

1. After the code is pushed, continuous integration is triggered; the image is built automatically and pushed to the artifact repository

2. Continuous deployment is triggered and the release goes to the pre-release environment

3. Manual confirmation: run an A/B test (or skip it and go straight to the automatic canary)

During the A/B test, only requests carrying the header location=Shenzhen reach the new version; all other users accessing the production environment still get the old version.

4. Manual confirmation: whether to run the automatic canary release (3 rounds are performed automatically, gradually raising the canary ratio, with a 30s interval between rounds)

First round: the new version's canary ratio is 30%; about 30% of the traffic to the production environment now enters the new version's canary environment.

After 30s, the second round starts automatically: the new version's ratio rises to 60%.

After the third round, the new version's ratio is 90%.

In this case we configured the automatic canary to proceed in three rounds, increasing the ratio by 30% each time and holding each stage for 30 seconds before automatically entering the next. At each higher stage, you will notice that requests hit the new version with increasing probability. The progressive canary can be configured freely to suit the business: for example, ten automatic rounds spread over a day until the release reaches production, with nobody attending.

5. The canary completes, and after 30s the release goes to the production environment

Project source code and principle analysis

Project source code address: https://wangweicoding.coding.net/public/nginx-ingress-gray/nginx-ingress-gray/git

```
├── Jenkinsfile                          # continuous integration script
├── deployment
│   ├── canary
│   │   └── deploy.yaml                  # canary deployment file
│   ├── dev
│   │   └── deploy.yaml                  # pre-release deployment file
│   └── pro
│       └── deploy.yaml                  # production deployment file
├── docker
│   ├── Dockerfile
│   └── html
│       └── index.html
├── nginx-ingress-init
│   ├── …                                # Nginx-ingress deployment files
│   │   ├── ClusterRoleBinding.yaml
│   │   ├── RoleBinding.yaml
│   │   ├── clusterRole.yaml
│   │   ├── defaultBackendService.yaml
│   │   ├── defaultBackendServiceaccount.yaml
│   │   ├── deployment.yaml
│   │   ├── nginxDefaultBackendDeploy.yaml
│   │   ├── roles.yaml
│   │   ├── service.yaml
│   │   └── serviceAccount.yaml
│   └── …                                # Nginx-ingress Helm package
│       └── nginx-ingress-1.36.3.tgz
└── …                                    # continuous deployment pipeline templates
    ├── gray-deploy.json                 # canary release pipeline
    ├── gray-init.json                   # canary release initialization (first run)
    └── nginx-ingress-init.json          # Nginx-ingress initialization (first run)
```

The canary environment and the production environment are implemented mainly by deployment/canary/deploy.yaml and deployment/pro/deploy.yaml, each of which defines:

  • Deployment
  • Service
  • Ingress

Both the A/B test and the canary ratio are controlled by the Ingress configuration:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx  # nginx=nginx-ingress | qcloud=CLB ingress
    nginx.ingress.kubernetes.io/canary: "true"                     # enable canary
    nginx.ingress.kubernetes.io/canary-by-header: "location"       # A/B test header key
    nginx.ingress.kubernetes.io/canary-by-header-value: "Shenzhen" # A/B test header value
  name: my-ingress
  namespace: pro
spec:
  rules:
  - host: nginx-ingress.coding.pro
    http:
      paths:
      - backend:
          serviceName: nginx-canary
          servicePort: 80
        path: /
```

The A/B test is controlled mainly by the annotations nginx.ingress.kubernetes.io/canary-by-header and nginx.ingress.kubernetes.io/canary-by-header-value, which match the key and value of a request header.

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx  # nginx=nginx-ingress | qcloud=CLB ingress
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "30"  # annotation values must be strings
  name: my-ingress
  namespace: pro
spec:
  rules:
  - host: nginx-ingress.coding.pro
    http:
      paths:
      - backend:
          serviceName: nginx-canary
          servicePort: 80
        path: /
```

The canary ratio is controlled by the annotation nginx.ingress.kubernetes.io/canary-weight, whose value ranges from 0 to 100 and corresponds to the canary weight percentage. Inside Nginx-ingress, the traffic split is implemented mainly by a weighted round-robin load-balancing algorithm.

The overall architecture is as follows:

Environment preparation

1. A Kubernetes cluster; Tencent Cloud's container service is recommended.
2. CODING DevOps enabled, providing image building and pipeline deployment capabilities.

Practical steps

1. Clone the source code and push it to your own CODING Git repository

```
$ git clone https://e.coding.net/wangweicoding/nginx-ingress-gray/nginx-ingress-gray.git
$ git remote set-url origin <your CODING Git repository URL>
$ git add .
$ git commit -a -m 'first commit'
$ git push -u origin master
```

Note: please change the image in the deploy.yaml files under the deployment/dev, deployment/canary, and deployment/pro folders to the image address of your own artifact repository.

2. Create the continuous integration pipeline
Create a build plan using the custom build process, and select the Jenkinsfile from the repository.

3. Add a cloud account and create the continuous deployment pipelines; copy the project's pipeline JSON templates into the pipelines you create (3 in total)

To make the templates easy to use, create a continuous deployment application named nginx-ingress.

Then create three blank deployment processes and copy the JSON templates into them, one per pipeline:

  • nginx-ingress-init – used to initialize Nginx-ingress
  • gray-init – used to initialize the environment (first run)
  • gray-deploy – used to demonstrate the canary release

Note: select your own cloud account in the pipelines above. In addition, in the gray-deploy pipeline, please reconfigure the "artifacts required to launch" and the "trigger".

4. Initialize Nginx-ingress (first run)
Running the nginx-ingress pipeline for the first time automatically deploys nginx-ingress for you. After the deployment succeeds, run kubectl get svc | grep nginx-ingress-controller to obtain the EXTERNAL-IP of Nginx-ingress; this IP is the cluster's request entry point. Add a Host entry for it on your local machine for easy access.

5. Initialize the canary release (first run)
Running the gray-init pipeline for the first time automatically deploys a complete environment; without this step, the automatic canary pipeline will fail.

6. Trigger the canary release automatically
Now try modifying the project's docker/html/index.html file and pushing the change. The build and the continuous deployment will be triggered automatically. After triggering, open the "Continuous Deployment" page to watch the deployment details and progress.

Summary

We mainly used the "Wait" stage of CODING Continuous Deployment: by setting a wait time for each canary stage, the stages run automatically one after another, finally achieving an unattended automatic canary release.

With the "Wait" stage, the release process proceeds smoothly, and manual intervention is needed only when something goes wrong with the release. Combined with the continuous deployment notification feature, the current release status can easily be pushed to collaboration tools such as WeCom (enterprise WeChat) and DingTalk.

For ease of demonstration, the canary ratios and wait times in this case are hard-coded. You can also use a stage's "custom parameters" to control the canary ratio and wait time dynamically, feeding in the ratio and process-control inputs according to the current release level, to make releases even more flexible.

Production suggestions

In this article, Nginx-ingress is deployed as a Deployment. As the gateway of the Kubernetes cluster, the high availability of Nginx-ingress determines the high availability of access to the cluster.

When deploying Nginx-ingress in a production environment, the following points are recommended:

  • Deploy it as a DaemonSet to avoid single-node failure.
  • Use label selectors to schedule the Nginx-ingress-controller onto dedicated nodes (for example, nodes with high CPU frequency, high network bandwidth, and high I/O) or onto lightly loaded nodes.
  • If you do use a Deployment, configure HPA horizontal autoscaling for Nginx-ingress.
