Note: This article mainly discusses running the continuous-integration process for a .NET 6.0 dapr project on k8s. In fact, the process is essentially the same as deploying any other project to k8s; only the k8s yaml configuration file differs.
## Process selection
Continuous integration for a dapr-based project includes the following steps:

- Compile and package the project
- Build the image from the Dockerfile and push it to a private registry
- Prepare the k8s deployment manifests
- Deploy the image to k8s through kubectl

There are several ways to arrange these steps:
| Option | Pipeline operation | Release operation | Advantage | Disadvantage |
|---|---|---|---|---|
| 1. Build and publish the image directly | 1. `docker build` the image 2. Push the image 3. Copy the yaml to the artifacts | k8s applies the yaml of the corresponding version with the specified image | Simple and direct | 1. Produces a large number of unnecessary images 2. Continuous integration takes longer 3. An image is built on every integration run |
| 2. Build the image at release time | `dotnet publish` to a zip only | 1. Build the image / push the image (optional) 2. Deploy to k8s with the specified image | Integration stays fast; a single deployment is slower, but repeated deployments of the same build are fast | The deployment process is slower than pulling a prebuilt image |
| 3. Publish the zip only and mount it into a fixed deployment image via a volume | `dotnet publish` to a zip only | Reuse the prebuilt image and change only the volume parameter | Fast | Cross-environment deployment depends heavily on the file system |
Weighing the advantages and disadvantages above, I finally chose the second option as a compromise: it neither slows down continuous integration nor produces too many images, at the cost of some redundant image-build time during deployment.
## Project structure

Add the following files to the project folder of each API to be published:

- dapr.yaml
- Dockerfile
### dapr.yaml

```yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: demo
  namespace: dapr-api
  labels:
    app: .api
    service: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      service: demo
  template:
    metadata:
      labels:
        app: .api
        service: demo
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "demo-api"
        dapr.io/app-port: "80"
        dapr.io/log-as-json: "true"
    spec:
      containers:
        - name: demo-api
          image: registry-address/image-name:220310  # placeholder; the release pipeline supplies the real image
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          imagePullPolicy: IfNotPresent
---
kind: Service
apiVersion: v1
metadata:
  name: demo-api
  namespace: dapr-api
  labels:
    app: .api
    service: demo
spec:
  type: NodePort
  selector:
    service: demo
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30004
```
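The `dapr.io/*` annotations on the pod template are what make the dapr sidecar injector attach a `daprd` container to the pod. Beyond the four used above, a few other commonly useful annotations from the dapr documentation can be added in the same `annotations:` block; the values below are illustrative, not part of the original manifest:

```yaml
annotations:
  dapr.io/enabled: "true"
  dapr.io/app-id: "demo-api"
  dapr.io/app-port: "80"
  dapr.io/log-as-json: "true"
  dapr.io/app-protocol: "http"           # protocol the app speaks: http (default) or grpc
  dapr.io/config: "tracing"              # reference a dapr Configuration resource by name
  dapr.io/sidecar-cpu-limit: "300m"      # cap resources of the injected sidecar
  dapr.io/sidecar-memory-limit: "256Mi"
```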
### Dockerfile

```dockerfile
FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS final
WORKDIR /app
EXPOSE 80
COPY ["./projectfolder", "/app"]
ENTRYPOINT ["dotnet", "projectdll.dll"]
```
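The Dockerfile above only copies output that the pipeline has already published, which is what makes option 2 work. For comparison, if option 1 from the table were chosen instead (building inside the image), a standard multi-stage Dockerfile could be used; the project paths below are placeholders matching the ones above:

```dockerfile
# build stage: compile and publish inside the SDK image
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish "./projectfolder/project.csproj" -c Release -o /app/publish

# runtime stage: copy only the published output
FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS final
WORKDIR /app
EXPOSE 80
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "projectdll.dll"]
```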
These two files differ for each project and are used later in the build and deployment process.
## Pipeline continuous-integration configuration

```yaml
trigger:
  batch: true
pool:
  name: Default
name: $(Date:yy)$(Date:MM)$(Date:dd)$(Rev:.r)
variables:
  BuildConfiguration: 'Release'
steps:
  - task: UseDotNet@2
    displayName: 'Check and Install .NET SDK 6.0'
    inputs:
      version: '6.0.x'
      includePreviewVersions: false
  - task: DotNetCoreCLI@2
    displayName: 'Publish to zip'
    inputs:
      command: publish
      publishWebProjects: false
      projects: './src/projectfolder/project.csproj'
      arguments: '--configuration $(BuildConfiguration) --output $(build.artifactstagingdirectory) -v n'
      zipAfterPublish: false
      workingDirectory: '$(Build.SourcesDirectory)/src'
  ## copy the two files above to the artifact
  - task: CopyFiles@2
    displayName: 'Copy dapr.yaml to: $(build.artifactstagingdirectory)'
    inputs:
      SourceFolder: './src/${{ parameters.project }}/'
      Contents: |
        Dockerfile
        dapr.yaml
      TargetFolder: '$(build.artifactstagingdirectory)'
  - task: PublishBuildArtifacts@1
    displayName: 'Publish Artifact'
    inputs:
      PathtoPublish: '$(build.artifactstagingdirectory)'
```
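Note that the copy step references `${{ parameters.project }}`. For that template expression to resolve, the pipeline file needs a matching `parameters` section; a minimal sketch, assuming a single string parameter named `project` whose default matches the placeholder folder above:

```yaml
parameters:
  - name: project
    displayName: 'Project folder to publish'
    type: string
    default: 'projectfolder'
```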
## Release process configuration

Create two new jobs for the release process.
### Job 1: build the image

```yaml
variables:
  image: 'custom image name'
steps:
  - task: Docker@2
    displayName: buildAndPush
    inputs:
      containerRegistry: harbor
      repository: '$(image)'
      Dockerfile: '$(System.DefaultWorkingDirectory)/_dapr-demo/drop/Dockerfile'
      tags: '$(Build.BuildNumber)'
```
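Because the Dockerfile copies `./projectfolder` relative to its build context, it may be necessary to point the Docker task's `buildContext` input at the drop folder explicitly rather than relying on the default; a sketch, reusing the same `_dapr-demo/drop` path as the task above:

```yaml
- task: Docker@2
  displayName: buildAndPush
  inputs:
    containerRegistry: harbor
    repository: '$(image)'
    command: buildAndPush
    Dockerfile: '$(System.DefaultWorkingDirectory)/_dapr-demo/drop/Dockerfile'
    buildContext: '$(System.DefaultWorkingDirectory)/_dapr-demo/drop'  # folder the COPY paths resolve against
    tags: '$(Build.BuildNumber)'
```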
### Job 2: KubeDeploy

```yaml
variables:
  image: 'custom image name, consistent with the above'
steps:
  - task: KubernetesManifest@0
    displayName: deploy
    inputs:
      kubernetesServiceConnection: online
      namespace: '$(NS)'   # deployment target namespace in k8s
      strategy: canary     # canary deployment strategy
      percentage: 50
      manifests: '$(System.DefaultWorkingDirectory)/_dapr-demo/drop/dapr.yaml'
      containers: '$(harborUrl)/$(image):$(Build.BuildNumber)'
```
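With `strategy: canary` and `percentage: 50`, the task leaves baseline and canary workloads running side by side; a follow-up run with `action: promote` (or `reject`) completes or rolls back the rollout. A sketch based on the KubernetesManifest task's documented actions, reusing the same inputs as the deploy step:

```yaml
- task: KubernetesManifest@0
  displayName: promote canary
  inputs:
    action: promote      # or 'reject' to roll the canary back
    kubernetesServiceConnection: online
    namespace: '$(NS)'
    strategy: canary
    manifests: '$(System.DefaultWorkingDirectory)/_dapr-demo/drop/dapr.yaml'
    containers: '$(harborUrl)/$(image):$(Build.BuildNumber)'
```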
In this way, the full set of pipelines runs on the first deployment. To roll back to an earlier version later, only the second job, KubeDeploy, needs to be run manually.
## Other notes

This whole process relies only on the configuration of Azure DevOps itself, not on the agent's environment. If you can rely on the agent environment, more approaches become available.