Kubebuilder practice 6: build, deploy and run

Date: 2022-05-01

Welcome to my GitHub

https://github.com/zq2599/blog_demos

Content: classified summaries of all my original articles and their companion source code, covering Java, Docker, Kubernetes, DevOps, and more;

Link article series

  1. Kubebuilder practice 1: preparation
  2. Kubebuilder practice 2: experience kubebuilder for the first time
  3. Kubebuilder practice 3: a quick overview of basic knowledge
  4. Kubebuilder practice 4: operator requirements description and design
  5. Kubebuilder practice 5: operator coding
  6. Kubebuilder practice 6: build, deploy and run
  7. Kubebuilder practice 7: webhook
  8. Kubebuilder practice 8: notes on knowledge points

Overview of this article

  • As the sixth article in the kubebuilder series: the coding was completed in the previous article, so now it is time to verify the functionality. Please make sure your docker and kubernetes environments are working, and then let's complete the following operations together:
  1. Deploy CRD
  2. Run controller locally
  3. Create an elasticweb resource object from a yaml file
  4. Verify through the logs and kubectl commands that the elasticweb functionality works
  5. Access the web service in a browser to verify that the business service works
  6. Modify singlePodQPS and see whether elasticweb automatically adjusts the number of pods
  7. Modify totalQPS and see whether elasticweb automatically adjusts the number of pods
  8. Delete the elasticweb object and confirm that the associated service and deployment are deleted automatically
  9. Build the controller image, run the controller in kubernetes, and verify that all of the above functions still work
  • The seemingly simple deployment-and-verification work adds up to quite a few steps. Well, no turning back now, let's get started;

Deploy CRD

  • From the console, enter the directory containing the Makefile and execute the command make install to deploy the CRD to kubernetes:
[email protected] elasticweb % make install
/Users/zhaoqin/go/bin/controller-gen "crd:trivialVersions=true" rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
kustomize build config/crd | kubectl apply -f -
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/elasticwebs.elasticweb.com.bolingcavalry configured
  • As can be seen above, the actual operation uses kustomize to merge the yaml resources under config/crd and apply them to kubernetes;

  • You can use the command kubectl api-versions to verify that the CRD was deployed successfully:

[email protected] elasticweb % kubectl api-versions|grep elasticweb
elasticweb.com.bolingcavalry/v1

Run controller locally

  • First, verify the controller's functionality in the simplest way. As shown in the figure below, my macbook is the development environment, and the Makefile in the elasticweb project is used to run the controller code locally:

(figure: the controller running locally on the development machine, connected to the kubernetes cluster)

  • Enter the directory containing the Makefile and execute the command make run to compile and run the controller:
[email protected] elasticweb % pwd
/Users/zhaoqin/github/blog_demos/kubebuilder/elasticweb
[email protected] elasticweb % make run
/Users/zhaoqin/go/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
go fmt ./...
go vet ./...
/Users/zhaoqin/go/bin/controller-gen "crd:trivialVersions=true" rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
go run ./main.go
2021-02-20T20:46:16.774+0800    INFO    controller-runtime.metrics      metrics server is starting to listen    {"addr": ":8080"}
2021-02-20T20:46:16.774+0800    INFO    setup   starting manager
2021-02-20T20:46:16.775+0800    INFO    controller-runtime.controller   Starting EventSource    {"controller": "elasticweb", "source": "kind source: /, Kind="}
2021-02-20T20:46:16.776+0800    INFO    controller-runtime.manager      starting metrics server {"path": "/metrics"}
2021-02-20T20:46:16.881+0800    INFO    controller-runtime.controller   Starting Controller     {"controller": "elasticweb"}
2021-02-20T20:46:16.881+0800    INFO    controller-runtime.controller   Starting workers        {"controller": "elasticweb", "worker count": 1}

Create an elasticweb resource object

  • The controller responsible for processing elasticweb has been running. Next, start creating elasticweb resource objects with yaml files;

  • In the config/samples directory, kubebuilder created a demo file named elasticweb_v1_elasticweb.yaml; however, its spec does not contain the four fields we defined, so change it to the following content:

apiVersion: v1
kind: Namespace
metadata:
  name: dev
  labels:
    name: dev
---
apiVersion: elasticweb.com.bolingcavalry/v1
kind: ElasticWeb
metadata:
  namespace: dev
  name: elasticweb-sample
spec:
  # Add fields here
  image: tomcat:8.0.18-jre8
  port: 30003
  singlePodQPS: 500
  totalQPS: 600
  • The parameters in the above configuration are explained as follows:
  1. The namespace used is dev
  2. The application deployed in this test is tomcat
  3. The service exposes the tomcat service on host port 30003 (via NodePort)
  4. Assuming a single pod can support 500 QPS, while the QPS of external requests is 600
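The number of pods the operator should create follows directly from these two parameters: totalQPS divided by singlePodQPS, rounded up. As a cross-check of the scenarios in this article, here is a minimal standalone sketch of that calculation (the function name expectReplicas mirrors the log output shown later, but this version is illustrative, not the operator's actual code):

```go
package main

import "fmt"

// expectReplicas returns the number of pods needed so that
// replicas * singlePodQPS >= totalQPS (integer ceiling division).
func expectReplicas(totalQPS, singlePodQPS int32) int32 {
	replicas := totalQPS / singlePodQPS
	if totalQPS%singlePodQPS != 0 {
		replicas++
	}
	return replicas
}

func main() {
	fmt.Println(expectReplicas(600, 500))  // the sample above: 2 pods
	fmt.Println(expectReplicas(600, 800))  // after raising singlePodQPS to 800: 1 pod
	fmt.Println(expectReplicas(2600, 800)) // after raising totalQPS to 2600: 4 pods
}
```

These three cases correspond to the initial deployment and the two patch operations performed later in this article.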
  • Execute the command kubectl apply -f config/samples/elasticweb_v1_elasticweb.yaml to create an elasticweb instance in kubernetes:
[email protected] elasticweb % kubectl apply -f config/samples/elasticweb_v1_elasticweb.yaml
namespace/dev created
elasticweb.elasticweb.com.bolingcavalry/elasticweb-sample created
  • Switch to the controller window and you will find many new log lines. Analyzing them shows that the Reconcile method was executed twice; the first execution created the deployment and service resources:
2021-02-21T10:03:57.108+0800    INFO    controllers.ElasticWeb  1. start reconcile logic        {"elasticweb": "dev/elasticweb-sample"}
2021-02-21T10:03:57.108+0800    INFO    controllers.ElasticWeb  3. instance : Image [tomcat:8.0.18-jre8], Port [30003], SinglePodQPS [500], TotalQPS [600], RealQPS [nil]       {"elasticweb": "dev/elasticweb-sample"}
2021-02-21T10:03:57.210+0800    INFO    controllers.ElasticWeb  4. deployment not exists        {"elasticweb": "dev/elasticweb-sample"}
2021-02-21T10:03:57.313+0800    INFO    controllers.ElasticWeb  set reference   {"func": "createService"}
2021-02-21T10:03:57.313+0800    INFO    controllers.ElasticWeb  start create service    {"func": "createService"}
2021-02-21T10:03:57.364+0800    INFO    controllers.ElasticWeb  create service success  {"func": "createService"}
2021-02-21T10:03:57.365+0800    INFO    controllers.ElasticWeb  expectReplicas [2]      {"func": "createDeployment"}
2021-02-21T10:03:57.365+0800    INFO    controllers.ElasticWeb  set reference   {"func": "createDeployment"}
2021-02-21T10:03:57.365+0800    INFO    controllers.ElasticWeb  start create deployment {"func": "createDeployment"}
2021-02-21T10:03:57.382+0800    INFO    controllers.ElasticWeb  create deployment success       {"func": "createDeployment"}
2021-02-21T10:03:57.382+0800    INFO    controllers.ElasticWeb  singlePodQPS [500], replicas [2], realQPS[1000] {"func": "updateStatus"}
2021-02-21T10:03:57.407+0800    DEBUG   controller-runtime.controller   Successfully Reconciled {"controller": "elasticweb", "request": "dev/elasticweb-sample"}
2021-02-21T10:03:57.407+0800    INFO    controllers.ElasticWeb  1. start reconcile logic        {"elasticweb": "dev/elasticweb-sample"}
2021-02-21T10:03:57.407+0800    INFO    controllers.ElasticWeb  3. instance : Image [tomcat:8.0.18-jre8], Port [30003], SinglePodQPS [500], TotalQPS [600], RealQPS [1000]      {"elasticweb": "dev/elasticweb-sample"}
2021-02-21T10:03:57.407+0800    INFO    controllers.ElasticWeb  9. expectReplicas [2], realReplicas [2] {"elasticweb": "dev/elasticweb-sample"}
2021-02-21T10:03:57.407+0800    INFO    controllers.ElasticWeb  10. return now  {"elasticweb": "dev/elasticweb-sample"}
2021-02-21T10:03:57.407+0800    DEBUG   controller-runtime.controller   Successfully Reconciled {"controller": "elasticweb", "request": "dev/elasticweb-sample"}
  • Then use kubectl get commands to inspect the resource objects in detail. Everything is as expected: the elasticweb, service, deployment, and pods are all normal:
[email protected] elasticweb % kubectl get elasticweb -n dev                                 
NAME                AGE
elasticweb-sample   35s
[email protected] elasticweb % kubectl get service -n dev                                    
NAME                TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
elasticweb-sample   NodePort   10.107.177.158   <none>        8080:30003/TCP   41s
[email protected] elasticweb % kubectl get deployment -n dev                                 
NAME                READY   UP-TO-DATE   AVAILABLE   AGE
elasticweb-sample   2/2     2            2           46s
[email protected] elasticweb % kubectl get pod -n dev                                        
NAME                                 READY   STATUS    RESTARTS   AGE
elasticweb-sample-56fc5848b7-l5thk   1/1     Running   0          50s
elasticweb-sample-56fc5848b7-lqjk5   1/1     Running   0          50s

Verify the business function in a browser

  • The docker image used in this deployment is tomcat, which makes verification very simple: if opening the default page shows the cat logo, tomcat has started successfully. The IP address of my kubernetes host is 192.168.50.75, so the browser can access http://192.168.50.75:30003; as shown in the following figure, the business function is normal:

(figure: the tomcat default page rendered in the browser)

Modify QPS of a single pod

  • Optimizations of the service itself, or changes in external dependencies (such as cache or database scaling), may raise the QPS a single pod can handle. Suppose the QPS of a single pod rises from 500 to 800; let's see whether our operator adjusts automatically (the total QPS is 600, so the number of pods should drop from 2 to 1):

  • In the config/samples directory, create a new file named update_single_pod_qps.yaml with the following content:

spec:
  singlePodQPS: 800
  • Execute the following command to update the QPS of a single pod from 500 to 800 (note that the --type parameter is important, don't forget it):
kubectl patch elasticweb elasticweb-sample \
-n dev \
--type merge \
--patch "$(cat config/samples/update_single_pod_qps.yaml)"
  • Now check the controller log, shown in the figure below: red box 1 shows that the spec has been updated, and red box 2 shows the number of pods computed from the latest parameters, which matches expectations:

(figure: controller log showing the updated spec and the recomputed pod count)

  • Check the pods with the kubectl get command; the count has dropped to 1:
[email protected] elasticweb % kubectl get pod -n dev                                                                                       
NAME                                 READY   STATUS    RESTARTS   AGE
elasticweb-sample-56fc5848b7-l5thk   1/1     Running   0          30m
  • Remember to use the browser to check whether Tomcat is normal;

Modify total QPS

  • The external QPS also changes frequently, and our operator needs to adjust the number of pod instances in time according to the total QPS to keep the overall service quality. Next, modify the total QPS and see whether the operator responds:

  • In the config/samples directory, create a new file named update_total_qps.yaml with the following content:

spec:
  totalQPS: 2600
  • Execute the following command to update the total QPS from 600 to 2600 (note that the --type parameter is important, don't forget it):
kubectl patch elasticweb elasticweb-sample \
-n dev \
--type merge \
--patch "$(cat config/samples/update_total_qps.yaml)"
  • Now check the controller log, shown in the figure below: red box 1 shows that the spec has been updated, and red box 2 shows the number of pods computed from the latest parameters, which matches expectations:

(figure: controller log showing the updated spec and the recomputed pod count)

  • Check the pods with the kubectl get command; the number of pods has increased to 4, and 4 pods can support 3200 QPS, which satisfies the current requirement of 2600:
[email protected] elasticweb % kubectl get pod -n dev
NAME                                 READY   STATUS    RESTARTS   AGE
elasticweb-sample-56fc5848b7-8n7tq   1/1     Running   0          8m22s
elasticweb-sample-56fc5848b7-f2lpb   1/1     Running   0          8m22s
elasticweb-sample-56fc5848b7-l5thk   1/1     Running   0          48m
elasticweb-sample-56fc5848b7-q8p5f   1/1     Running   0          8m22s
  • Remember to use the browser to check whether Tomcat is normal;
  • You may feel that adjusting the pod count this way is rather crude, and you would be right. But you could develop a separate application that collects the current QPS and automatically calls client-go to modify the totalQPS of the elasticweb object, letting the operator adjust the number of pods in time; that would at least be semi-automatic;
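For reference, the payload such a program would submit is an ordinary JSON merge patch on spec.totalQPS, equivalent to the kubectl patch commands used above. Below is a minimal stdlib-only sketch of building that payload; actually sending it would be done with client-go (for example its dynamic client, against the elasticweb.com.bolingcavalry/v1 group/version), and the helper name totalQPSPatch is hypothetical:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// totalQPSPatch builds the JSON merge-patch body that updates
// spec.totalQPS on an ElasticWeb object. A monitoring program
// could submit this via client-go instead of shelling out to
// `kubectl patch --type merge`.
func totalQPSPatch(qps int32) ([]byte, error) {
	patch := map[string]interface{}{
		"spec": map[string]interface{}{
			"totalQPS": qps,
		},
	}
	return json.Marshal(patch)
}

func main() {
	body, err := totalQPSPatch(2600)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body)) // {"spec":{"totalQPS":2600}}
}
```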

Delete validation

  • At present, the dev namespace contains service, deployment, pod, and elasticweb resource objects. To delete them all, it is enough to delete the elasticweb object, because the service and deployment are associated with it; the code that establishes this association is shown in the red box below:

(figure: the controller code that sets the elasticweb object as owner of the service and deployment)
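The mechanism behind this cascading deletion is the owner reference that the controller's create logic sets on each child resource. On the generated Deployment the resulting metadata would look roughly like the fragment below (the uid value is a placeholder; real values are assigned by kubernetes):

```yaml
# metadata of the Deployment created by the controller (illustrative fragment)
metadata:
  name: elasticweb-sample
  namespace: dev
  ownerReferences:
  - apiVersion: elasticweb.com.bolingcavalry/v1
    kind: ElasticWeb
    name: elasticweb-sample
    uid: 00000000-0000-0000-0000-000000000000  # placeholder
    controller: true
    blockOwnerDeletion: true
```

When the owning ElasticWeb object is deleted, the kubernetes garbage collector removes every object that carries such a reference, which is why the service, deployment, and pods disappear on their own.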

  • Execute the command to delete elasticweb:
kubectl delete elasticweb elasticweb-sample -n dev
  • Checking the other resources shows that they are deleted automatically:
[email protected] elasticweb % kubectl delete elasticweb elasticweb-sample -n dev
elasticweb.elasticweb.com.bolingcavalry "elasticweb-sample" deleted
[email protected] elasticweb % kubectl get pod -n dev                            
NAME                                 READY   STATUS        RESTARTS   AGE
elasticweb-sample-56fc5848b7-9lcww   1/1     Terminating   0          45s
elasticweb-sample-56fc5848b7-n7p7f   1/1     Terminating   0          45s
[email protected] elasticweb % kubectl get pod -n dev
NAME                                 READY   STATUS        RESTARTS   AGE
elasticweb-sample-56fc5848b7-n7p7f   0/1     Terminating   0          73s
[email protected] elasticweb % kubectl get pod -n dev
No resources found in dev namespace.
[email protected] elasticweb % kubectl get deployment -n dev
No resources found in dev namespace.
[email protected] elasticweb % kubectl get service -n dev   
No resources found in dev namespace.
[email protected] elasticweb % kubectl get namespace dev 
NAME   STATUS   AGE
dev    Active   97s

Build image

  1. So far we have tried all of the controller's functions in the development environment. In a real production environment, however, the controller is not independent of kubernetes like this; it runs inside kubernetes as a pod. Next we will compile and build the controller code into a docker image and run it on kubernetes;
  2. The first thing to do is press Ctrl+C in the console where the controller is running, to stop it;
  3. You will need an image registry that kubernetes can access, such as Harbor in the LAN or the public hub.docker.com; I chose hub.docker.com for convenience, and the prerequisite for using it is a hub.docker.com account;
  4. On the kubebuilder computer, open a console and log in to hub.docker.com with the docker login command; then you can push images with docker push from this console (the site's network is quite poor, and it may take several attempts to log in successfully);
  5. Execute the following command to build the docker image and push it to hub.docker.com; the image name is bolingcavalry/elasticweb:002:
make docker-build docker-push IMG=bolingcavalry/elasticweb:002
  6. hub.docker.com's network condition is unusually poor, so docker on the kubebuilder computer should have an image-acceleration mirror configured. If the above command fails with a timeout, retry it several times. In addition, many Go module dependencies are downloaded during the build, which also requires patience and may hit network problems that need retries; it is therefore best to use a Harbor service built in the LAN;
  7. Finally, after the command succeeds, the output is as follows:
[email protected] elasticweb % make docker-build docker-push IMG=bolingcavalry/elasticweb:002
/Users/zhaoqin/go/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
go fmt ./...
go vet ./...
/Users/zhaoqin/go/bin/controller-gen "crd:trivialVersions=true" rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
go test ./... -coverprofile cover.out
?       elasticweb      [no test files]
?       elasticweb/api/v1       [no test files]
ok      elasticweb/controllers  8.287s  coverage: 0.0% of statements
docker build . -t bolingcavalry/elasticweb:002
[+] Building 146.8s (17/17) FINISHED                                                                                                                                                                                                  
 => [internal] load build definition from Dockerfile                                                                                                                                                                             0.1s
 => => transferring dockerfile: 37B                                                                                                                                                                                              0.0s
 => [internal] load .dockerignore                                                                                                                                                                                                0.0s
 => => transferring context: 2B                                                                                                                                                                                                  0.0s
 => [internal] load metadata for gcr.io/distroless/static:nonroot                                                                                                                                                                1.8s
 => [internal] load metadata for docker.io/library/golang:1.13                                                                                                                                                                   0.7s
 => [builder 1/9] FROM docker.io/library/golang:[email protected]:8ebb6d5a48deef738381b56b1d4cd33d99a5d608e0d03c5fe8dfa3f68d41a1f8                                                                                                     0.0s
 => [stage-1 1/3] FROM gcr.io/distroless/static:[email protected]:b89b98ea1f5bc6e0b48c8be6803a155b2a3532ac6f1e9508a8bcbf99885a9152                                                                                                  0.0s
 => [internal] load build context                                                                                                                                                                                                0.0s
 => => transferring context: 14.51kB                                                                                                                                                                                             0.0s
 => CACHED [builder 2/9] WORKDIR /workspace                                                                                                                                                                                      0.0s
 => CACHED [builder 3/9] COPY go.mod go.mod                                                                                                                                                                                      0.0s
 => CACHED [builder 4/9] COPY go.sum go.sum                                                                                                                                                                                      0.0s
 => CACHED [builder 5/9] RUN go mod download                                                                                                                                                                                     0.0s
 => CACHED [builder 6/9] COPY main.go main.go                                                                                                                                                                                    0.0s
 => CACHED [builder 7/9] COPY api/ api/                                                                                                                                                                                          0.0s
 => [builder 8/9] COPY controllers/ controllers/                                                                                                                                                                                 0.1s
 => [builder 9/9] RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 GO111MODULE=on go build -a -o manager main.go                                                                                                                      144.5s
 => CACHED [stage-1 2/3] COPY --from=builder /workspace/manager .                                                                                                                                                                0.0s
 => exporting to image                                                                                                                                                                                                           0.0s
 => => exporting layers                                                                                                                                                                                                          0.0s
 => => writing image sha256:622d30aa44c77d93db4093b005fce86b39d5ba5c6cd29f1fb2accb7e7f9b23b8                                                                                                                                     0.0s
 => => naming to docker.io/bolingcavalry/elasticweb:002                                                                                                                                                                          0.0s
docker push bolingcavalry/elasticweb:002
The push refers to repository [docker.io/bolingcavalry/elasticweb]
eea77d209b68: Layer already exists 
8651333b21e7: Layer already exists 
002: digest: sha256:c09ab87f6fce3d85f1fda0ffe75ead9db302a47729aefd3ef07967f2b99273c5 size: 739
  8. On the hub.docker.com website, as shown in the figure below, the new image has been uploaded, so any machine with Internet access can pull it for local use:

(figure: the bolingcavalry/elasticweb:002 image listed on hub.docker.com)

  9. With the image ready, execute the following command to deploy the controller in the kubernetes environment:
make deploy IMG=bolingcavalry/elasticweb:002
  10. Next, create the elasticweb resource object as before and verify that all resources are created successfully:
[email protected] elasticweb % make deploy IMG=bolingcavalry/elasticweb:002
/Users/zhaoqin/go/bin/controller-gen "crd:trivialVersions=true" rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
cd config/manager && kustomize edit set image controller=bolingcavalry/elasticweb:002
kustomize build config/default | kubectl apply -f -
namespace/elasticweb-system created
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/elasticwebs.elasticweb.com.bolingcavalry configured
role.rbac.authorization.k8s.io/elasticweb-leader-election-role created
clusterrole.rbac.authorization.k8s.io/elasticweb-manager-role created
clusterrole.rbac.authorization.k8s.io/elasticweb-proxy-role created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
clusterrole.rbac.authorization.k8s.io/elasticweb-metrics-reader created
rolebinding.rbac.authorization.k8s.io/elasticweb-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/elasticweb-manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/elasticweb-proxy-rolebinding created
service/elasticweb-controller-manager-metrics-service created
deployment.apps/elasticweb-controller-manager created
[email protected] elasticweb % 
[email protected] elasticweb % kubectl apply -f config/samples/elasticweb_v1_elasticweb.yaml 
namespace/dev created
elasticweb.elasticweb.com.bolingcavalry/elasticweb-sample created
[email protected] elasticweb % kubectl get service -n dev  
NAME                TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
elasticweb-sample   NodePort   10.96.234.7   <none>        8080:30003/TCP   13s
[email protected] elasticweb % kubectl get deployment -n dev
NAME                READY   UP-TO-DATE   AVAILABLE   AGE
elasticweb-sample   2/2     2            2           18s
[email protected] elasticweb % kubectl get pod -n dev     
NAME                                 READY   STATUS    RESTARTS   AGE
elasticweb-sample-56fc5848b7-559lw   1/1     Running   0          22s
elasticweb-sample-56fc5848b7-hp4wv   1/1     Running   0          22s
  11. That's not all: there is another important source of information, the controller's own log. First, list all pods to find the controller pod:
[email protected] elasticweb % kubectl get pods --all-namespaces
NAMESPACE           NAME                                             READY   STATUS    RESTARTS   AGE
dev                 elasticweb-sample-56fc5848b7-559lw               1/1     Running   0          68s
dev                 elasticweb-sample-56fc5848b7-hp4wv               1/1     Running   0          68s
elasticweb-system   elasticweb-controller-manager-5795d4d98d-t6jvc   2/2     Running   0          98s
kube-system         coredns-7f89b7bc75-5pdwc                         1/1     Running   15         20d
kube-system         coredns-7f89b7bc75-nvbvm                         1/1     Running   15         20d
kube-system         etcd-hedy                                        1/1     Running   15         20d
kube-system         kube-apiserver-hedy                              1/1     Running   15         20d
kube-system         kube-controller-manager-hedy                     1/1     Running   16         20d
kube-system         kube-flannel-ds-v84vc                            1/1     Running   22         20d
kube-system         kube-proxy-hlppx                                 1/1     Running   15         20d
kube-system         kube-scheduler-hedy                              1/1     Running   16         20d
test-clientset      client-test-deployment-7677cc9669-kd7l7          1/1     Running   9          9d
test-clientset      client-test-deployment-7677cc9669-kt5rv          1/1     Running   9          9d
  12. As can be seen, the controller's pod name is elasticweb-controller-manager-5795d4d98d-t6jvc. This pod contains two containers, so when viewing the log you must specify the container with the -c manager parameter, as in the following command:
kubectl logs -f \
elasticweb-controller-manager-5795d4d98d-t6jvc \
-c manager \
-n elasticweb-system
  13. The familiar business log appears again:
2021-02-21T08:52:27.064Z        INFO    controllers.ElasticWeb  1. start reconcile logic        {"elasticweb": "dev/elasticweb-sample"}
2021-02-21T08:52:27.064Z        INFO    controllers.ElasticWeb  3. instance : Image [tomcat:8.0.18-jre8], Port [30003], SinglePodQPS [500], TotalQPS [600], RealQPS [nil]       {"elasticweb": "dev/elasticweb-sample"}
2021-02-21T08:52:27.064Z        INFO    controllers.ElasticWeb  4. deployment not exists        {"elasticweb": "dev/elasticweb-sample"}
2021-02-21T08:52:27.064Z        INFO    controllers.ElasticWeb  set reference   {"func": "createService"}
2021-02-21T08:52:27.064Z        INFO    controllers.ElasticWeb  start create service    {"func": "createService"}
2021-02-21T08:52:27.107Z        INFO    controllers.ElasticWeb  create service success  {"func": "createService"}
2021-02-21T08:52:27.107Z        INFO    controllers.ElasticWeb  expectReplicas [2]      {"func": "createDeployment"}
2021-02-21T08:52:27.107Z        INFO    controllers.ElasticWeb  set reference   {"func": "createDeployment"}
2021-02-21T08:52:27.107Z        INFO    controllers.ElasticWeb  start create deployment {"func": "createDeployment"}
2021-02-21T08:52:27.119Z        INFO    controllers.ElasticWeb  create deployment success       {"func": "createDeployment"}
2021-02-21T08:52:27.119Z        INFO    controllers.ElasticWeb  singlePodQPS [500], replicas [2], realQPS[1000] {"func": "updateStatus"}
2021-02-21T08:52:27.198Z        DEBUG   controller-runtime.controller   Successfully Reconciled {"controller": "elasticweb", "request": "dev/elasticweb-sample"}
2021-02-21T08:52:27.198Z        INFO    controllers.ElasticWeb  1. start reconcile logic        {"elasticweb": "dev/elasticweb-sample"}
2021-02-21T08:52:27.198Z        INFO    controllers.ElasticWeb  3. instance : Image [tomcat:8.0.18-jre8], Port [30003], SinglePodQPS [500], TotalQPS [600], RealQPS [1000]      {"elasticweb": "dev/elasticweb-sample"}
2021-02-21T08:52:27.198Z        INFO    controllers.ElasticWeb  9. expectReplicas [2], realReplicas [2] {"elasticweb": "dev/elasticweb-sample"}
2021-02-21T08:52:27.198Z        INFO    controllers.ElasticWeb  10. return now  {"elasticweb": "dev/elasticweb-sample"}
2021-02-21T08:52:27.198Z        DEBUG   controller-runtime.controller   Successfully Reconciled {"controller": "elasticweb", "request": "dev/elasticweb-sample"}
  14. Then use the browser to verify that tomcat started successfully;

Uninstall and clean up

  • After the experience, if you want to clean up all the resources created earlier (note: this removes the resource definitions, i.e. the CRD, not just individual resource objects), execute the following command:
make uninstall
  • So far, the design, development, deployment, and verification of the whole operator is complete. I hope this article can serve as a useful reference while you develop your own operator;

You’re not alone. Xinchen’s original accompanies you all the way

  1. Java series
  2. Spring series
  3. Docker series
  4. Kubernetes series
  5. Database + middleware series
  6. Devops series

Welcome to official account: programmer Xinchen

Search "programmer Xinchen" on WeChat. I'm Xinchen, and I look forward to traveling the Java world with you
https://github.com/zq2599/blog_demos