The Kubernetes principles you have to understand

Time: 2021-07-22


Kubernetes has become the de facto standard for container orchestration. It is a container-based cluster orchestration engine with capabilities such as cluster scaling, rolling upgrades and rollbacks, elastic scaling, self-healing, and service discovery.

This article gives you a quick overview of Kubernetes and of what we are really talking about when we talk about Kubernetes.

Kubernetes architecture


At a high level, the overall architecture of Kubernetes consists of the master, the nodes, and etcd.

The master is the control-plane node responsible for managing the entire Kubernetes cluster. It runs components such as the API server, scheduler, and controllers, all of which interact with etcd to store data.

  • API server: provides the unified entry point for all resource operations, shielding other components from interacting with etcd directly. Its responsibilities include security (authentication and authorization), registration, and discovery.
  • Scheduler: assigns pods to nodes according to scheduling rules.
  • Controllers: the resource control center, responsible for driving resources toward their desired state.

Nodes are the worker machines that provide computing power for the cluster and are where containers actually run. Each node runs a container runtime, kubelet, and kube-proxy.

  • kubelet: manages the container lifecycle, collects monitoring data via cAdvisor, performs health checks, and regularly reports node status.
  • kube-proxy: provides in-cluster service discovery and load balancing based on services, watching for changes to services/endpoints and refreshing the load-balancing rules accordingly.

Start by creating a deployment


A deployment is a controller resource for orchestrating pods, which we will introduce later. Let's take a deployment as an example and see what each component in the architecture does while a deployment resource is being created.

  • First, kubectl sends a request to create a deployment
  • The API server receives the request and writes the related resources to etcd; from then on, every component interacts with the API server/etcd in a similar way
  • The deployment controller lists/watches resource changes and issues a request to create a replicaset
  • The replicaset controller lists/watches resource changes and issues a pod creation request
  • The scheduler detects the unbound pod and, through a series of filtering and scoring steps, selects a suitable node to bind it to
  • The kubelet notices that a new pod needs to be created on its node and takes responsibility for creating the pod and managing its subsequent lifecycle
  • kube-proxy is responsible for initializing service-related resources, including network rules for service discovery and load balancing

So far, through the division of labor and coordination among Kubernetes components, the whole process from a deployment creation request to the specific pods running normally has been completed.
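To make the first step concrete, here is a minimal sketch that issues the same create request with the official Kubernetes Python client instead of kubectl. The deployment name demo-nginx, the namespace, and the nginx image are illustrative placeholders, not part of the original article.

```python
# A minimal sketch using the official Kubernetes Python client.
# The deployment name, namespace and image are illustrative placeholders.
from kubernetes import client, config

config.load_kube_config()        # use the local kubeconfig, just like kubectl does
apps_v1 = client.AppsV1Api()     # deployments live in the apps/v1 API group

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="demo-nginx"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "demo-nginx"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "demo-nginx"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="nginx", image="nginx:1.21")]
            ),
        ),
    ),
)

# Equivalent to a "kubectl create": the API server persists the object in etcd,
# then the controllers, scheduler and kubelet take over as described above.
apps_v1.create_namespaced_deployment(namespace="default", body=deployment)
```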

Pod

Among the many API resources in Kubernetes, the pod is the most important and fundamental one, and the smallest deployment unit.

The first question to consider is: why do we need pods? A pod can be seen as a container design pattern, created for containers with a "closely coupled" relationship, such as a servlet container deployed together with a WAR package, or an application container with a log-collection sidecar. These containers often need to share network, storage, and configuration, which is why the pod concept exists.


Within a pod, the different containers share the same network namespace through the infra container, and by mounting the same volume they naturally share storage, for example a directory on the host.
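As a hedged sketch of such a "closely coupled" pod, the manifest below runs an application container and a log-collecting sidecar that share an emptyDir volume; the names, images, and paths are illustrative assumptions.

```python
# A sketch of a pod whose two containers share storage (and, implicitly, the
# pod's network namespace). Names, images and paths are placeholders.
from kubernetes import client, config

config.load_kube_config()
core_v1 = client.CoreV1Api()

pod_manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "web-with-log-sidecar"},
    "spec": {
        "volumes": [{"name": "logs", "emptyDir": {}}],
        "containers": [
            {   # main application container writing its logs into the shared volume
                "name": "web",
                "image": "nginx:1.21",
                "volumeMounts": [{"name": "logs", "mountPath": "/var/log/nginx"}],
            },
            {   # sidecar tailing the same directory from its own mount point
                "name": "log-tail",
                "image": "busybox:1.35",
                "command": ["sh", "-c", "tail -F /logs/access.log"],
                "volumeMounts": [{"name": "logs", "mountPath": "/logs"}],
            },
        ],
    },
}

core_v1.create_namespaced_pod(namespace="default", body=pod_manifest)
```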

Container orchestration

Container orchestration is Kubernetes' specialty, so we need to understand it. Kubernetes provides many orchestration-related controller resources, such as deployment for stateless applications, statefulset for stateful applications, daemonset for per-node daemon processes, and job/cronjob for offline workloads.

Let's take the most widely used one, deployment, as an example. The relationship among deployment, replicaset, and pod is one of layered control: in short, a replicaset controls the number of pods, while a deployment controls the version attribute of replicasets. This design provides the basis for the two most fundamental orchestration actions: horizontal scaling driven by the replica count, and update/rollback driven by the version attribute.

Horizontal scaling


Horizontal scaling is easy to understand: we only need to change the number of pod replicas controlled by the replicaset, for example from 2 to 3, to scale out, and vice versa to scale in.
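A minimal sketch of that action, assuming the illustrative demo-nginx deployment from the earlier example exists:

```python
# Scale an existing deployment from 2 to 3 replicas by patching its scale subresource.
# "demo-nginx" is the illustrative deployment created earlier.
from kubernetes import client, config

config.load_kube_config()
apps_v1 = client.AppsV1Api()

apps_v1.patch_namespaced_deployment_scale(
    name="demo-nginx",
    namespace="default",
    body={"spec": {"replicas": 3}},
)
# The replicaset controller notices the new desired count and creates one more pod.
```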

Update / rollback

Update/rollback is where the replicaset proves its necessity. For example, if we need to move three instances from version v1 to v2, the number of pods controlled by the v1 replicaset gradually goes from 3 to 0, while the number controlled by the v2 replicaset goes from 0 to 3. When only the v2 replicaset remains in the deployment, the update is complete. A rollback is the reverse.
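As a sketch, changing the pod template (here the container image) is what triggers the deployment controller to create the new replicaset described above; the version numbers are placeholders.

```python
# Patching the pod template's image makes the deployment controller create a new
# replicaset ("v2") and gradually scale the old one ("v1") down to zero.
from kubernetes import client, config

config.load_kube_config()
apps_v1 = client.AppsV1Api()

patch = {
    "spec": {
        "template": {
            "spec": {
                "containers": [{"name": "nginx", "image": "nginx:1.22"}]
            }
        }
    }
}
apps_v1.patch_namespaced_deployment(name="demo-nginx", namespace="default", body=patch)
# A rollback is conceptually the reverse: re-applying the previous template
# (which is what "kubectl rollout undo" does).
```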

Rolling update

Notice that in the example above, the pods are upgraded one by one during the update, with at least two pods always available and at most four pods in service. The advantage of this "rolling update" is obvious: if the new version has a bug, the remaining two pods can still serve traffic, and rolling back quickly is easy.

In practice, we control this behaviour by configuring the rolling update strategy: maxSurge is the maximum number of new pods the deployment controller may create above the desired replica count, while maxUnavailable is the maximum number of old pods the deployment controller may take out of service at a time.
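A sketch of how this strategy is declared on a deployment; the concrete values are just one possible choice that matches the example above.

```python
# Declares the rolling update strategy on the illustrative "demo-nginx" deployment.
# maxSurge: how many pods may exist above the desired replica count during the update.
# maxUnavailable: how many pods may be unavailable at the same time.
from kubernetes import client, config

config.load_kube_config()
apps_v1 = client.AppsV1Api()

strategy_patch = {
    "spec": {
        "strategy": {
            "type": "RollingUpdate",
            "rollingUpdate": {"maxSurge": 1, "maxUnavailable": 1},
        }
    }
}
apps_v1.patch_namespaced_deployment(
    name="demo-nginx", namespace="default", body=strategy_patch
)
```

With 3 desired replicas, these values allow at most 4 pods during an update while keeping at least 2 available, matching the numbers described above.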

Network in kubernetes

Now that we understand how container orchestration works, how do containers communicate with each other?

When it comes to network communication, Kubernetes first requires a "three-way connectivity" foundation:

  • A node can reach any pod
  • Pods on the same node can communicate with each other
  • Pods on different nodes can communicate with each other


In short, pods on the same node communicate with each other through the cni0/docker0 bridge, and the node can also reach its pods through that same bridge.

There are many ways to implement pod communication across nodes, including Flannel's popular VXLAN and host-gw modes. Flannel obtains the network information of other nodes through etcd and creates routing entries on the local node, so that pods on different nodes can communicate across hosts.

Microservices

Before moving on, we need to understand a very important resource object: the service.

Why do we need services? In a microservice architecture, a pod corresponds to an instance, so a service corresponds to a microservice. During service invocation, the service solves two problems:

  • Pod IPs are not fixed, so making network calls to ever-changing IPs is impractical
  • Service calls need to be load balanced across the different pods

A service selects the appropriate pods through its label selector and builds an endpoints object, i.e. a load-balancing list of pods. In practice, we usually label the pod instances of a microservice with app=xxx and create a service for that microservice with the label selector app=xxx.
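A hedged sketch of this pattern, reusing the illustrative app=demo-nginx label from the earlier examples:

```python
# A ClusterIP service selecting pods labelled app=demo-nginx; the endpoints
# controller builds the load-balancing list from the matching pods.
from kubernetes import client, config

config.load_kube_config()
core_v1 = client.CoreV1Api()

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="demo-nginx"),
    spec=client.V1ServiceSpec(
        selector={"app": "demo-nginx"},
        ports=[client.V1ServicePort(port=80, target_port=80)],
        type="ClusterIP",
    ),
)
core_v1.create_namespaced_service(namespace="default", body=service)
```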

Service discovery and network call in kubernetes

With the "three-way connectivity" network foundation above in place, we can look at how network calls in a microservice architecture are implemented in Kubernetes.

This part is essentially about how Kubernetes implements service discovery, which deserves a more detailed treatment elsewhere; here is a brief introduction.

Inter-service calls

First is east-west traffic, i.e. calls between services. There are mainly two call modes: ClusterIP mode and DNS mode.

ClusterIP is a service type. In this mode, kube-proxy implements a VIP (virtual IP) for the service through iptables/IPVS; accessing the VIP load-balances requests across the pods behind the service.

[Figure: a ClusterIP implementation]

The figure above shows one implementation of ClusterIP; besides it, there are also the userspace proxy mode (rarely used anymore) and the IPVS mode (better performance).

DNS mode is easy to understand: a ClusterIP-type service gets an A record of the form service-name.namespace-name.svc.cluster.local pointing to its ClusterIP address, so in general we can simply call the service by name.
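A small sketch of a DNS-mode call from inside the cluster. It assumes the caller is a pod in the default namespace with the Python requests library available, and that demo-nginx is the illustrative service created above.

```python
# From a pod inside the cluster, the service can be reached by name: cluster DNS
# resolves it to the ClusterIP, and kube-proxy load-balances to a backing pod.
import requests

# Fully qualified form: <service>.<namespace>.svc.cluster.local
resp = requests.get("http://demo-nginx.default.svc.cluster.local/")
print(resp.status_code)

# Within the same namespace, the short service name usually suffices:
resp = requests.get("http://demo-nginx/")
```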

Access from outside the cluster


North-south traffic, i.e. external requests entering the Kubernetes cluster, is handled mainly in three ways: NodePort, LoadBalancer, and Ingress.

NodePort is another service type: through iptables, a specific port on each host forwards traffic to the service behind it.
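A sketch of a NodePort service for the same illustrative pods; the node port value is an assumption and must fall within the cluster's node-port range (30000-32767 by default).

```python
# Exposes the same pods on a port of every node; traffic to <node-ip>:30080
# is forwarded to the pods behind the service.
from kubernetes import client, config

config.load_kube_config()
core_v1 = client.CoreV1Api()

nodeport_service = client.V1Service(
    metadata=client.V1ObjectMeta(name="demo-nginx-nodeport"),
    spec=client.V1ServiceSpec(
        type="NodePort",
        selector={"app": "demo-nginx"},
        ports=[client.V1ServicePort(port=80, target_port=80, node_port=30080)],
    ),
)
core_v1.create_namespaced_service(namespace="default", body=nodeport_service)
```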

LoadBalancer is yet another service type, implemented by a load balancer provided by the public cloud vendor.

Accessing 100 services might require creating 100 NodePorts or LoadBalancers. What we really want is a unified external access layer in front of the cluster, and that is exactly what Ingress provides. Ingress is a unified access layer that routes requests to different backend services according to routing rules; it can be regarded as the "service of services". Ingress implementations are often combined with NodePort or LoadBalancer.
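A hedged sketch of an Ingress that routes two illustrative paths to two backend services. The host, the paths, and the demo-api service are assumptions, and an ingress controller must be installed in the cluster for the rules to take effect.

```python
# An Ingress routing by host/path to different backend services.
# Host, paths and service names are placeholders; an ingress controller is required.
from kubernetes import client, config

config.load_kube_config()
networking_v1 = client.NetworkingV1Api()

ingress_manifest = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "Ingress",
    "metadata": {"name": "demo-ingress"},
    "spec": {
        "rules": [
            {
                "host": "demo.example.com",
                "http": {
                    "paths": [
                        {
                            "path": "/web",
                            "pathType": "Prefix",
                            "backend": {
                                "service": {"name": "demo-nginx", "port": {"number": 80}}
                            },
                        },
                        {
                            "path": "/api",
                            "pathType": "Prefix",
                            "backend": {
                                "service": {"name": "demo-api", "port": {"number": 8080}}
                            },
                        },
                    ]
                },
            }
        ]
    },
}
networking_v1.create_namespaced_ingress(namespace="default", body=ingress_manifest)
```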

By now we have a basic understanding of the main Kubernetes concepts, how it works, and how microservices run on it, so the next time we hear people talking about Kubernetes, we will know what they are talking about.

Author: fredalxin
Address:https://fredal.xin/what-is-ku…
