Author: Wang Wei, a CODING DevOps back-end development engineer with many years of R&D experience, a long-time fan of cloud native, DevOps, and Kubernetes, a member of the ServiceMesh Chinese community, and holder of Kubernetes CKA and CKAD certifications.
The simplest way for an application running on Kubernetes to achieve gray (canary) release is to introduce the official Nginx-ingress: we deploy two sets of Deployments and Services to represent the gray environment and the production environment, and let the load-balancing algorithm split traffic between the two environments according to the gray weight, thereby realizing gray release.
The common practice is: after the project packages a new image, modify the image version in the yaml file and execute `kubectl apply` to update the service. If the release also needs to be gradual, the gray scale is controlled by adjusting the weights in the configuration files of the two environments. This approach is inseparable from manual execution; when the number of projects is large and the gray period is long, the probability of human error rises sharply. Such heavy reliance on manual operation is intolerable in DevOps engineering practice.
So, is there a way to achieve automatic gray release without human intervention? For example: after the code is updated, it is automatically released to the pre-release and gray environments; the gray proportion automatically increases from 10% to 100% over one day (and can be terminated at any time); and once the gray phase passes, it is automatically released to the production environment?
The answer is yes: CODING DevOps can meet such needs.
Architecture and principle of Nginx-ingress
The architecture and implementation principle of Nginx-ingress: a Service receives cluster traffic and forwards it to the Nginx-ingress Pod, which checks its configured policies and forwards the traffic to the target Service; finally, the traffic reaches the business container.
Native Nginx requires policies to be written in its conf file, but Nginx-ingress-controller works with Kubernetes yaml configuration instead: after we configure a policy in a yaml file, Nginx-ingress-controller converts it, dynamically updates the policy, and dynamically reloads the Nginx Pod, realizing automatic management.
How does Nginx-ingress-controller dynamically perceive policy changes in the cluster? There are several approaches: it can obtain them through a webhook admission interceptor, or through interaction between a ServiceAccount and the Kubernetes API. Nginx-ingress-controller uses the latter. That is why, when deploying Nginx-ingress, we find that its Deployment specifies a ServiceAccount for the Pod and binds it with a RoleBinding, so that the Pod can interact with the Kubernetes API.
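The ServiceAccount and RoleBinding wiring described above can be sketched roughly as follows (object names and the image tag are illustrative assumptions; the repository's `nginx-ingress-init` directory contains the actual definitions):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress          # grants read/watch on Ingress, Service, Endpoints, etc.
subjects:
- kind: ServiceAccount
  name: nginx-ingress
  namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
spec:
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      serviceAccountName: nginx-ingress   # lets the Pod watch policy changes via the API
      containers:
      - name: controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
```

With this binding in place, the controller Pod can watch Ingress objects and reload Nginx whenever a policy changes.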
Implementation scheme Preview
In order to achieve the above objectives, we designed the following continuous deployment pipeline.
This continuous deployment pipeline mainly realizes the following steps:
1. Automatically deploy to the pre-release environment
2. Optionally perform an A/B test
3. Automatic gray release (3 automatic rounds that gradually increase the gray scale)
4. Release to the production environment
At the same time, this case demonstrates the whole flow from pushing code with git to automatically triggered continuous integration:
1. After the code is pushed, continuous integration is triggered and the image is built automatically
2. After the image is built, it is automatically pushed to the artifact repository
3. Continuous deployment is triggered
1. After the code is pushed, continuous integration is triggered, and the image is automatically built and pushed to the artifact repository
2. Continuous deployment is triggered and the new version is released to the pre-release environment
3. Manual confirmation: perform an A/B test (or skip it and go straight to automatic gray release)
During the A/B test, only requests carrying the header `location=Shenzhen` reach the new version; all other users still access the old version in the production environment.
4. Manual confirmation: whether to start automatic gray release (3 rounds are performed automatically, gradually increasing the gray scale, with a 30s interval between rounds)
First round: the gray scale of the new version is 30%; about 30% of the traffic to the production environment now enters the new version's gray environment:
After 30s, the second round runs automatically: the gray scale of the new version rises to 60%:
After 60s, the third round runs automatically: the gray scale of the new version rises to 90%:
In this case, we configured progressive automatic gray release: 3 rounds, each increasing the proportion by 30% and lasting 30s before automatically entering the next stage. At higher gray levels, the probability that a request hits the new version grows correspondingly. The progression can be configured freely according to business needs; for example, it could run 10 rounds over a full day until the release reaches production, entirely unattended.
5. After the gray phase completes, the new version is released to the production environment in 30s
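The progressive schedule described in steps 4 and 5 can be sketched as a small simulation (a hypothetical helper for illustration, not part of the project's actual pipeline):

```python
import time

def gray_schedule(rounds=3, step=30, interval_s=30, dry_run=True):
    """Compute the progressive canary-weight plan used in this case:
    `rounds` rounds, each adding `step` percent, waiting `interval_s`
    seconds between rounds. With dry_run=True we only return the plan."""
    weights = [step * i for i in range(1, rounds + 1)]
    for weight in weights:
        # In the real pipeline, each round would update the
        # canary-weight annotation on the Ingress, then wait.
        if not dry_run:
            time.sleep(interval_s)
    return weights

# The 3-round, 30%-step schedule from this article: [30, 60, 90]
print(gray_schedule())
```

Changing `rounds` and `step` reproduces the "10 rounds over a day" variant mentioned above.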
Project source code and principle analysis
Project source code address: https://wangweicoding.coding….
```
├── Jenkinsfile                         # continuous integration script
├── deployment
│   ├── canary
│   │   └── deploy.yaml                 # gray release deployment file
│   ├── dev
│   │   └── deploy.yaml                 # pre-release deployment file
│   └── pro
│       └── deploy.yaml                 # production deployment file
├── docker
│   ├── Dockerfile
│   └── html
│       └── index.html
├── nginx-ingress-init
│   ├── nginx-ingress-deployment       # nginx-ingress deployment files
│   │   ├── ClusterRoleBinding.yaml
│   │   ├── RoleBinding.yaml
│   │   ├── clusterRole.yaml
│   │   ├── defaultBackendService.yaml
│   │   ├── defaultBackendServiceaccount.yaml
│   │   ├── deployment.yaml
│   │   ├── nginxDefaultBackendDeploy.yaml
│   │   ├── roles.yaml
│   │   ├── service.yaml
│   │   └── serviceAccount.yaml
│   └── nginx-ingress-helm             # nginx-ingress helm package
│       └── nginx-ingress-1.36.3.tgz
└── pipeline                            # continuous deployment pipeline templates
    ├── gray-deploy.json                # gray release pipeline
    ├── gray-init.json                  # gray release initialization (first run)
    └── nginx-ingress-init.json         # nginx-ingress initialization (first run)
```
The gray environment and the production environment are mainly defined by `deployment/canary/deploy.yaml` and `deployment/pro/deploy.yaml`, which implement the two sets of environments:
Both the A/B test and the gray scale are configured through annotations on the Ingress:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx  # nginx=nginx-ingress | qcloud=CLB ingress
    nginx.ingress.kubernetes.io/canary: "true"                      # enable canary
    nginx.ingress.kubernetes.io/canary-by-header: "location"        # A/B test header key
    nginx.ingress.kubernetes.io/canary-by-header-value: "Shenzhen"  # A/B test header value
  name: my-ingress
  namespace: pro
spec:
  rules:
  - host: nginx-ingress.coding.pro
    http:
      paths:
      - backend:
          serviceName: nginx-canary
          servicePort: 80
        path: /
```
The A/B test is mainly controlled by the annotations `nginx.ingress.kubernetes.io/canary-by-header` and `nginx.ingress.kubernetes.io/canary-by-header-value`, which match the key and value of the request header.
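The matching behaviour can be illustrated with a tiny simulation (illustrative logic only, not the controller's actual implementation):

```python
def route_request(headers, header_key="location", header_value="Shenzhen"):
    """Decide which backend receives a request under the A/B-test rule:
    a request goes to the canary (new version) only when the configured
    header key carries exactly the configured value."""
    if headers.get(header_key) == header_value:
        return "canary"       # matching header -> new version
    return "production"       # everyone else -> old version

print(route_request({"location": "Shenzhen"}))   # canary
print(route_request({"location": "Beijing"}))    # production
print(route_request({}))                         # production
```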
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx  # nginx=nginx-ingress | qcloud=CLB ingress
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "30"  # annotation values must be strings
  name: my-ingress
  namespace: pro
spec:
  rules:
  - host: nginx-ingress.coding.pro
    http:
      paths:
      - backend:
          serviceName: nginx-canary
          servicePort: 80
        path: /
```
The gray scale is controlled by the annotation `nginx.ingress.kubernetes.io/canary-weight`, whose value ranges from 0 to 100 and corresponds to the gray weight proportion. In Nginx-ingress, traffic splitting is implemented mainly by a weighted round-robin load-balancing algorithm.
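The effect of `canary-weight` can be approximated with a short simulation (a probabilistic sketch of weighted splitting, not the controller's actual algorithm):

```python
import random

def split_traffic(canary_weight, requests=10000, seed=42):
    """Simulate canary-weight splitting: each request is routed to the
    canary backend with probability canary_weight/100. Returns the
    observed fraction of requests that reached the canary."""
    rng = random.Random(seed)
    canary_hits = sum(
        1 for _ in range(requests) if rng.randint(0, 99) < canary_weight
    )
    return canary_hits / requests

# With canary-weight: 30, roughly 30% of traffic reaches the gray environment.
print(split_traffic(30))
```

Raising the weight round by round (30, 60, 90) is exactly what makes new-version hits increasingly likely during the automatic gray stages.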
The overall architecture is shown in the figure below:
1. Clone the source code and push it to your own CODING git repository
```shell
$ git clone https://e.coding.net/wangweicoding/nginx-ingress-gray/nginx-ingress-gray.git
$ git remote set-url origin <your CODING git repository URL>
$ git add .
$ git commit -a -m 'first commit'
$ git push -u origin master
```
Please note that before pushing
deploy.yamlImage is modified as the image address of your own product library.
2. Create a continuous integration pipeline
Use the custom build process to create a build plan, and select the code repository to use.
3. Add a cloud account, create continuous deployment pipelines, and copy the project's pipeline JSON templates into the created pipelines (3 in total)
To make the templates easier to use, create a continuous deployment application named nginx-ingress, then create blank deployment processes and copy the JSON templates into them. Three pipelines are created in total:
- nginx-ingress-init – used to initialize nginx-ingress
- gray-init – used to initialize the environment (first run)
- gray-deploy – used to demonstrate gray release
Note: set the cloud account of each pipeline above to your own cloud account. In addition, in the gray-deploy pipeline, reconfigure the "artifacts required to start" and the "trigger".
4. Initialize nginx ingress (first run)
The nginx-ingress-init pipeline automatically deploys nginx-ingress for you. After the deployment succeeds, run `kubectl get svc | grep nginx-ingress-controller` to obtain the `EXTERNAL-IP`; this IP is the cluster's request entry point. Add a Host entry for it on your machine for easy access.
5. Initialize grayscale publishing (first run)
The gray-init pipeline automatically deploys a complete environment on first run; without it, the automatic gray release pipeline will fail.
6. Automatically trigger grayscale Publishing
Now you can try modifying the project's `docker/html/index.html` file. After you push, the build and continuous deployment are triggered automatically; then open the "Continuous Deployment" page to view the deployment details and progress.
We mainly use the "wait" stage of CODING Continuous Deployment: by setting a waiting time for the stages with different gray weights, the gray stages run automatically one after another, finally achieving unattended automatic gray release. With the "wait" stage, a smooth release process can be achieved, and manual intervention is needed only when a release goes wrong. Combined with the continuous deployment notification feature, the current release status can easily be pushed to WeCom, DingTalk, and other collaboration tools.
For ease of demonstration, the gray weights and waiting times are hard-coded in this case. You can also use "custom parameters" in a stage to dynamically control the gray weight and waiting time, feeding them in as input for the current release and making the release process more flexible.
The edge gateway of a Kubernetes cluster carries all ingress traffic, so its availability directly determines the availability of the cluster. When deploying Nginx-ingress in a production environment, the following points are recommended:
- Prefer the DaemonSet deployment method to avoid a single node failure taking down the ingress layer.
- Use a label selector to deploy the Nginx-ingress-controller on dedicated nodes (e.g. high-CPU, high-network, high-IO nodes) or on low-load nodes.
- If a Deployment is used instead, configure HPA horizontal autoscaling for Nginx-ingress.
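The first two recommendations could be combined roughly as follows (the node label, names, and image tag are assumptions for illustration, not the project's actual manifests):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
spec:
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      nodeSelector:
        node-role/ingress: "true"   # assumed label marking dedicated edge nodes
      serviceAccountName: nginx-ingress
      containers:
      - name: controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
```

A DaemonSet runs one controller Pod per matching node, so losing any single edge node leaves the remaining nodes serving ingress traffic.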