When using AWS's managed Kubernetes service (EKS), you can't avoid using AWS's load balancers and block storage. All resources in the AWS public cloud can carry custom tags, which lets you audit and account for resources along whatever dimensions the tags encode: by department, by project, by environment (test, prod, uat), and so on. When you set a Service's type to LoadBalancer, you can customize the tags through the following annotations.
```yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    # service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
    service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "sgt:env=prod,sgt:group=SGT,sgt:project=hawkeye"
  labels:
    app: prometheus-server
  name: prometheus-server
  namespace: kube-system
spec:
  ports:
  - name: http
    port: 9090
    protocol: TCP
    targetPort: 9090
  selector:
    app: prometheus-server
  type: LoadBalancer
```
Unfortunately, block storage (EBS) in k8s does not support this approach. Perhaps AWS considers storage cheap enough not to be worth auditing. But EBS itself does support tagging.
Therefore, to satisfy our company's storage auditing requirements when adopting k8s, we designed the add-ebs-tags-controller component.
add-ebs-tags-controller explained
As is well known, storage in k8s is realized through PVs and PVCs. So add-ebs-tags-controller watches for all newly created PVCs, reads each new PVC's `volume.beta.kubernetes.io/aws-block-storage-additional-resource-tags` annotation, and finally calls the AWS SDK to tag the corresponding EBS volume.
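That workflow can be sketched in Go without pulling in client-go or the AWS SDK. Everything below — the `PVC` struct, the `Tagger` interface, `fakeTagger`, and `reconcile` — is a simplified stand-in of my own, not the controller's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

const tagAnnotation = "volume.beta.kubernetes.io/aws-block-storage-additional-resource-tags"

// PVC is a pared-down stand-in for a PersistentVolumeClaim,
// holding only the fields this sketch cares about.
type PVC struct {
	Name        string
	VolumeID    string // ID of the EBS volume backing the bound PV
	Annotations map[string]string
}

// Tagger abstracts the AWS EC2 CreateTags call so the logic can be
// exercised without real AWS credentials.
type Tagger interface {
	CreateTags(volumeID string, tags map[string]string) error
}

// reconcile handles one observed PVC: if the claim declares the tag
// annotation, parse its comma-separated key=value pairs and push them
// onto the claim's EBS volume.
func reconcile(pvc PVC, tagger Tagger) error {
	raw, ok := pvc.Annotations[tagAnnotation]
	if !ok || pvc.VolumeID == "" {
		return nil // no tags desired, or volume not bound yet
	}
	tags := map[string]string{}
	for _, pair := range strings.Split(raw, ",") {
		if kv := strings.SplitN(strings.TrimSpace(pair), "=", 2); len(kv) == 2 {
			tags[kv[0]] = kv[1]
		}
	}
	return tagger.CreateTags(pvc.VolumeID, tags)
}

// fakeTagger records calls instead of hitting AWS.
type fakeTagger struct{ tagged map[string]map[string]string }

func (f *fakeTagger) CreateTags(id string, tags map[string]string) error {
	f.tagged[id] = tags
	return nil
}

func main() {
	ft := &fakeTagger{tagged: map[string]map[string]string{}}
	pvc := PVC{
		Name:        "prometheus-claim",
		VolumeID:    "vol-0123456789abcdef0", // hypothetical volume ID
		Annotations: map[string]string{tagAnnotation: "sgt:env=prod,sgt:project=hawkeye"},
	}
	if err := reconcile(pvc, ft); err != nil {
		panic(err)
	}
	fmt.Println(ft.tagged["vol-0123456789abcdef0"]["sgt:env"]) // prints prod
}
```

In the real controller the `Tagger` role is played by the AWS SDK, and the volume ID comes from the PV that the PVC binds to.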
See github for the code.
The code is relatively simple and you can explore it yourself. The overall implementation follows the same idea as other controllers: listen for a specified resource, then handle its add, update, and delete events respectively.
Of course, it is worth mentioning that the core concept of k8s controller design is that the controller watches the actual state and keeps making specific adjustments so that it converges toward the desired state (the spec).
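That observe-and-adjust idea can be illustrated with a toy control loop. This is purely illustrative (not the controller's code): each pass narrows the gap between observed state and the spec by one step.

```go
package main

import "fmt"

// converge drives actual toward desired one step at a time,
// mimicking how a controller narrows the gap between observed
// state and the spec on each pass of its loop.
func converge(desired, actual int) int {
	for actual != desired {
		if actual < desired {
			actual++ // e.g. create a missing replica
		} else {
			actual-- // e.g. remove an extra replica
		}
	}
	return actual
}

func main() {
	fmt.Println(converge(3, 0)) // prints 3
}
```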
Deployment

The deployment yaml is as follows:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: add-ebs-tags-controller
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: add-ebs-tags-controller
      task: tool
  template:
    metadata:
      labels:
        task: tool
        k8s-app: add-ebs-tags-controller
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      serviceAccount: cluster-admin
      containers:
      - name: add-ebs-tags-controller
        image: iyacontrol/add-ebs-tags-controller:0.0.1
        imagePullPolicy: IfNotPresent
```
Notice serviceAccount: cluster-admin. This ServiceAccount does not exist in the cluster by default; you can grant the necessary permissions via RBAC yourself.
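For reference, one way to set that up is sketched below. The binding name is my own choice, and binding to the built-in cluster-admin ClusterRole is deliberately broad — in production you may want a narrower ClusterRole limited to PVCs and PVs:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cluster-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: add-ebs-tags-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin   # built-in ClusterRole
subjects:
- kind: ServiceAccount
  name: cluster-admin
  namespace: kube-system
```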
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: prometheus-claim
  namespace: kube-system
  annotations:
    volume.beta.kubernetes.io/aws-block-storage-additional-resource-tags: "sgt:env=prod,sgt:group=SGT,sgt:project=hawkeye"
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
```
After creating the PVC, go to the AWS console and you can see the tags on the corresponding EBS volume: