20. Kubernetes (k8s) notes: Pod resource scheduling (III) - Taints and tolerations

Date: 2022-05-24

Summary:

A taint is key-value attribute data defined on a node that makes the node refuse to run Pods scheduled onto it, unless a Pod object declares a toleration for that taint. A toleration is key-value attribute data defined on a Pod object that lists the node taints it can tolerate; the scheduler will only place a Pod object onto a node whose taints the Pod tolerates, as shown in the figure

  • Whether a Pod can be scheduled to a node depends on:
  • whether the node carries any taints, and
  • if it does, whether the Pod can tolerate those taints

Taints and tolerations

Taints are defined in a node's spec, while tolerations are defined in a Pod's spec. Both are key-value data, but they also support an additional effect field. The syntax is key=value:effect. The usage and format of key and value are similar to those of resource annotations, while effect defines how strictly Pod objects are excluded. It takes one of the following three effect identifiers:
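On the node side, this syntax corresponds to a taint entry under the node's spec. A minimal sketch (the key `node-type` and value `production` are hypothetical, chosen only for illustration):

```yaml
# Fragment of a Node object: this taint repels any Pod
# that does not declare a matching toleration.
spec:
  taints:
  - key: node-type        # hypothetical taint key
    value: production     # hypothetical taint value
    effect: NoSchedule    # NoSchedule, PreferNoSchedule, or NoExecute
```

The same taint can be applied from the command line with `kubectl taint nodes <node> node-type=production:NoSchedule`.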

  • NoSchedule
    A new Pod object that cannot tolerate this taint will not be scheduled to the node. This is a hard constraint; Pod objects already running on the node are not affected.
  • PreferNoSchedule
    The soft version of NoSchedule: the scheduler tries to avoid placing a new Pod object that cannot tolerate this taint on the node, but will still accept it when no other node is available. Existing Pod objects on the node are not affected.
  • NoExecute
    A new Pod object that cannot tolerate this taint will not be scheduled to the node (a hard constraint), and when an existing Pod object on the node no longer matches the rules because the node's taints or the Pod's tolerations changed, that Pod object is evicted.

When defining a toleration on a Pod object, two operators are supported. One is the equality comparison Equal, which requires the toleration and the taint to match exactly on key, value, and effect. The other is the existence check Exists, which requires only key and effect to match exactly; the toleration's value field must then be left empty.
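The two operators can be sketched side by side as follows (the key `node-type` and value `production` are hypothetical):

```yaml
# Pod spec fragment: two ways to tolerate a
# node-type=production:NoSchedule taint.
tolerations:
- key: node-type          # Equal: key, value, and effect must all match
  operator: Equal
  value: production
  effect: NoSchedule
- key: node-type          # Exists: only key and effect are matched;
  operator: Exists        # value must be left empty
  effect: NoSchedule
```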

Pod scheduling sequence

A node can carry multiple taints, and a Pod object can declare multiple tolerations. The scheduler checks the match between the two with the following logic:

  1. First, set aside every taint that has a matching toleration.
  2. If any remaining taint uses the NoSchedule effect, the Pod object is rejected from the node.
  3. If none uses NoSchedule but at least one uses PreferNoSchedule, the scheduler tries to avoid placing the Pod object on the node.
  4. If at least one unmatched taint uses the NoExecute effect, the node immediately evicts the Pod object, or refuses to schedule it; in addition, even when a toleration matches a NoExecute taint, if that toleration defines a time limit via the tolerationSeconds attribute, the Pod object is still evicted from the node once the time limit expires.
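The tolerationSeconds behavior in step 4 can be sketched like this (the key `diskfull` mirrors Example 2 below; the 300-second limit is an arbitrary choice):

```yaml
# Pod spec fragment: tolerate a diskfull=true:NoExecute taint,
# but only for 300 seconds. If the taint is still present after
# that, the node evicts the Pod anyway.
tolerations:
- key: diskfull
  operator: Equal
  value: "true"
  effect: NoExecute
  tolerationSeconds: 300
```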

In Kubernetes clusters deployed with kubeadm, the master node is automatically given a taint so that Pod objects unable to tolerate it are not scheduled there. User-created Pod objects that do not deliberately add a toleration for this taint will therefore never be scheduled to the master node.

Example 1: schedule Pods to the master by tolerating its node-role.kubernetes.io/master:NoSchedule taint

[root@k8s-master Scheduler]# kubectl describe node k8s-master.org   # view the master's taints
...
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false

[root@k8s-master Scheduler]# cat tolerations-daemonset-demo.yaml 
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemonset-demo
  namespace: default
  labels:
    app: prometheus
    component: node-exporter
spec:
  selector:
    matchLabels:
      app: prometheus
      component: node-exporter
  template:
    metadata:
      name: prometheus-node-exporter
      labels:
        app: prometheus
        component: node-exporter
    spec:
      tolerations:                            # tolerate the master's NoSchedule taint
      - key: node-role.kubernetes.io/master   # taint key
        effect: NoSchedule                    # effect identifier
        operator: Exists                      # existence match
      containers:
      - image: prom/node-exporter:latest
        name: prometheus-node-exporter
        ports:
        - name: prom-node-exp
          containerPort: 9100
          hostPort: 9100

[root@k8s-master Scheduler]# kubectl apply -f tolerations-daemonset-demo.yaml 

[root@k8s-master Scheduler]# kubectl get pod -o wide
NAME                                       READY   STATUS    RESTARTS   AGE     IP              NODE             NOMINATED NODE   READINESS GATES
daemonset-demo-7fgnd                       2/2     Running   0          5m15s   10.244.91.106   k8s-node2.org    <none>           <none>
daemonset-demo-dmd47                       2/2     Running   0          5m15s   10.244.70.105   k8s-node1.org    <none>           <none>
daemonset-demo-jhzwf                       2/2     Running   0          5m15s   10.244.42.29    k8s-node3.org    <none>           <none>
daemonset-demo-rcjmv                       2/2     Running   0          5m15s   10.244.59.16    k8s-master.org   <none>           <none>

Example 2: add a NoExecute taint to a node to evict all Pods on it

[root@k8s-master Scheduler]# kubectl taint --help
Update the taints on one or more nodes.

  *  A taint consists of a key, value, and effect. As an argument here, it is expressed as key=value:effect.
  *  The key must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores, up to
253 characters.
  *  Optionally, the key can begin with a DNS subdomain prefix and a single '/', like example.com/my-app
  *  The value is optional. If given, it must begin with a letter or number, and may contain letters, numbers, hyphens,
dots, and underscores, up to  63 characters.
  *  The effect must be NoSchedule, PreferNoSchedule or NoExecute.
  *  Currently taint can only apply to node.

Examples:
  # Update node 'foo' with a taint with key 'dedicated' and value 'special-user' and effect 'NoSchedule'.
  # If a taint with that key and effect already exists, its value is replaced as specified.
  kubectl taint nodes foo dedicated=special-user:NoSchedule
  
  # Remove from node 'foo' the taint with key 'dedicated' and effect 'NoSchedule' if one exists.
  kubectl taint nodes foo dedicated:NoSchedule-
  
  # Remove from node 'foo' all the taints with key 'dedicated'
  kubectl taint nodes foo dedicated-
  
  # Add a taint with key 'dedicated' on nodes having label mylabel=X
  kubectl taint node -l myLabel=X  dedicated=foo:PreferNoSchedule
  
  # Add to node 'foo' a taint with key 'bar' and no value
  kubectl taint nodes foo bar:NoSchedule


[root@k8s-master Scheduler]# kubectl get pod -o wide
NAME                                         READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
daemonset-demo-7ghhd                         1/1     Running   0          23m   192.168.113.35   k8s-node1    <none>           <none>
daemonset-demo-cjxd5                         1/1     Running   0          23m   192.168.12.35    k8s-node2    <none>           <none>
daemonset-demo-lhng4                         1/1     Running   0          23m   192.168.237.4    k8s-master   <none>           <none>
daemonset-demo-x5nhg                         1/1     Running   0          23m   192.168.51.54    k8s-node3    <none>           <none>
pod-antiaffinity-required-697f7d764d-69vx4   0/1     Pending   0          8s    <none>           <none>       <none>           <none>
pod-antiaffinity-required-697f7d764d-7cxp2   1/1     Running   0          8s    192.168.51.55    k8s-node3    <none>           <none>
pod-antiaffinity-required-697f7d764d-rpb5r   1/1     Running   0          8s    192.168.12.36    k8s-node2    <none>           <none>
pod-antiaffinity-required-697f7d764d-vf2x8   1/1     Running   0          8s    192.168.113.36   k8s-node1    <none>           <none>
  • Add a NoExecute taint to k8s-node3 to evict all Pods on the node
[root@k8s-master Scheduler]# kubectl taint node k8s-node3 diskfull=true:NoExecute 
node/k8s-node3 tainted
[root@k8s-master Scheduler]# kubectl describe node k8s-node3
...
CreationTimestamp:  Sun, 29 Aug 2021 22:45:43 +0800
Taints:             diskfull=true:NoExecute
  • All Pods on the node have been evicted; however, because the pod-antiaffinity Deployment requires that at most one of its Pods run on each node, the evicted replicas stay Pending instead of being recreated on other nodes
[root@k8s-master Scheduler]# kubectl get pod -o wide 
NAME                                         READY   STATUS    RESTARTS   AGE     IP               NODE         NOMINATED NODE   READINESS GATES
daemonset-demo-7ghhd                         1/1     Running   0          31m     192.168.113.35   k8s-node1    <none>           <none>
daemonset-demo-cjxd5                         1/1     Running   0          31m     192.168.12.35    k8s-node2    <none>           <none>
daemonset-demo-lhng4                         1/1     Running   0          31m     192.168.237.4    k8s-master   <none>           <none>
pod-antiaffinity-required-697f7d764d-69vx4   0/1     Pending   0          7m45s   <none>           <none>       <none>           <none>
pod-antiaffinity-required-697f7d764d-l86td   0/1     Pending   0          6m5s    <none>           <none>       <none>           <none>
pod-antiaffinity-required-697f7d764d-rpb5r   1/1     Running   0          7m45s   192.168.12.36    k8s-node2    <none>           <none>
pod-antiaffinity-required-697f7d764d-vf2x8   1/1     Running   0          7m45s   192.168.113.36   k8s-node1    <none>           <none>
  • Remove the taint, and the Pods are recreated
[root@k8s-master Scheduler]# kubectl taint node k8s-node3 diskfull- 
node/k8s-node3 untainted
[root@k8s-master Scheduler]# kubectl get pod -o wide
NAME                                         READY   STATUS              RESTARTS   AGE    IP               NODE         NOMINATED NODE   READINESS GATES
daemonset-demo-7ghhd                         1/1     Running             0          34m    192.168.113.35   k8s-node1    <none>           <none>
daemonset-demo-cjxd5                         1/1     Running             0          34m    192.168.12.35    k8s-node2    <none>           <none>
daemonset-demo-lhng4                         1/1     Running             0          34m    192.168.237.4    k8s-master   <none>           <none>
daemonset-demo-m6g26                         0/1     ContainerCreating   0          4s     <none>           k8s-node3    <none>           <none>
pod-antiaffinity-required-697f7d764d-69vx4   0/1     ContainerCreating   0          10m    <none>           k8s-node3    <none>           <none>
pod-antiaffinity-required-697f7d764d-l86td   0/1     Pending             0          9m1s   <none>           <none>       <none>           <none>
pod-antiaffinity-required-697f7d764d-rpb5r   1/1     Running             0          10m    192.168.12.36    k8s-node2    <none>           <none>
pod-antiaffinity-required-697f7d764d-vf2x8   1/1     Running             0          10m    192.168.113.36   k8s-node1    <none>           <none>
