Application of Kubernetes multi-cluster in the open source project KubeSphere

Time: 2020-12-30

Kubernetes multi-cluster scenarios

With the popularity of containers and the maturity of Kubernetes, it has become quite common for an enterprise to run multiple Kubernetes clusters. Broadly speaking, the main multi-cluster use scenarios are as follows.

[Figure: multi-cluster scenarios]

High availability

The business load can be distributed across multiple clusters, with a global VIP or DNS domain name routing each request to an appropriate back-end cluster. When one cluster fails to handle requests, the VIP or DNS record can be switched to a healthy cluster.

[Figure: high availability]

Low latency

To minimize network latency, clusters are deployed in multiple regions and user requests are routed to the nearest one. For example, with three Kubernetes clusters deployed in Beijing, Shanghai, and Guangzhou, requests from users in Guangdong are forwarded to the Guangzhou cluster, which reduces the latency caused by geographical distance and keeps the user experience as consistent as possible.

Fault isolation

In general, several small clusters isolate faults more easily than one large cluster. When a cluster suffers a power outage, a network failure, or a chain reaction triggered by resource exhaustion, multiple clusters confine the failure to the affected cluster and keep it from spreading to the others.

Business isolation

Although Kubernetes provides namespaces for application isolation, this is only logical isolation: workloads can still compete for resources, and stricter isolation requires configuring additional policies. Multiple clusters provide complete physical isolation, which is more secure and reliable than namespace-based isolation. Typical examples are different departments in an enterprise running their own independent clusters, or deploying the development / test / build environments on separate clusters.

[Figure: pipelines]

Avoid single-vendor lock-in

Kubernetes has become the de facto standard for container orchestration. Deploying clusters with different cloud providers avoids putting all your eggs in one basket: services can be migrated and scaled across clusters at any time. The downsides are increased cost and, since different vendors' managed Kubernetes services differ in their storage and network interfaces, the fact that migrating a business between them is not easy.

Multi-cluster deployment

The multi-cluster scenarios above solve many problems, but they also bring operational complexity. With a single cluster, deploying and updating applications is straightforward: apply the YAML to the cluster directly. With multiple clusters, you could update them one by one, but how do you keep the workload state consistent across clusters? How do you do service discovery between clusters? How do you balance load among clusters? The community's answer is Federation.

Federation v1

[Figure: Federation v1 architecture]

The Federation project has gone through two versions; the earliest, v1, has been abandoned. In v1 the overall architecture closely resembled that of Kubernetes itself: a federated API server received requests to create multi-cluster deployments, and the federation controller manager in the control plane was responsible for creating the corresponding workloads on each member cluster.

[Figure: annotations]

At the API level, v1 scheduled federated resources through annotations, which kept it compatible with the original Kubernetes API to the greatest possible extent. The advantage was that existing code could be reused and users could migrate their existing deployment files without many changes. But this also restricted Federation's further development: the API could not evolve well, and every kind of federated resource needed its own controller for multi-cluster scheduling, so early Federation supported only a limited number of resource types.
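
For a flavor of the annotation approach, here is a hedged sketch based on the v1 federated ReplicaSet documentation (the exact payload may have varied across releases): scheduling preferences ride along as a JSON blob inside an annotation on an otherwise ordinary object.

```yaml
# Sketch of Federation v1 scheduling: cluster preferences live in an
# annotation rather than in first-class API fields.
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: nginx
  annotations:
    federation.kubernetes.io/replica-set-preferences: |
      {
        "rebalance": true,
        "clusters": {
          "beijing":  {"minReplicas": 2, "weight": 1},
          "shanghai": {"minReplicas": 2, "weight": 2}
        }
      }
spec:
  replicas: 6
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
```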

Federation v2

[Figure: Federation v2 (kubefed) architecture]

Building on v1, the community developed Federation v2, i.e. kubefed. Kubefed defines its own API specification with CRDs, which by then were relatively mature in Kubernetes, abandoning the earlier annotation-based approach. Its architecture also changed considerably: the independently deployed federated API server and etcd are gone, and the kubefed control plane adopts the now-popular CRD + controller pattern, so it can be installed directly on an existing Kubernetes cluster without any additional deployment.

V2 defines four resource types:

Cluster configuration: defines the registration information a member cluster needs to join the control plane, including the cluster's name, the address of its API server, and the credentials needed to create workloads on it.
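
A minimal example of such a registration, following kubefed's KubeFedCluster type (the cluster name, address, and secret name here are placeholders; in practice this object is usually created by `kubefedctl join`):

```yaml
apiVersion: core.kubefed.io/v1beta1
kind: KubeFedCluster
metadata:
  name: gondor
  namespace: kube-federation-system
spec:
  apiEndpoint: https://172.16.0.10:6443  # the member cluster's kube-apiserver
  caBundle: LS0tLS1CRUdJTi...            # base64 CA bundle (truncated)
  secretRef:
    name: gondor-credentials             # secret holding the access token
```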

Type configuration: defines the resource objects that the federation can handle. Each type configuration is a CRD object that contains the following three parts:

  • Template: contains the resource object to be processed. If the object has no corresponding definition on a target cluster, creation there will fail. The FederatedDeployment template in the example below contains all the information needed to create a Deployment object.
  • Placement: defines the clusters in which the resource object will be created; it can be expressed with clusters or a clusterSelector.
  • Override: as the name implies, overrides parts of the template for particular clusters. In the example below, the template defines 1 replica while the override for the gondor cluster sets 4 replicas, so the Deployment created on gondor has 4 replicas instead of the template's 1. In principle, any subset of the template's content can be overridden.

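A sketch of such a FederatedDeployment, in the style of the kubefed user guide (object names are placeholders), matching the description above: a template with 1 replica and an override raising the gondor cluster to 4.

```yaml
apiVersion: types.kubefed.io/v1beta1
kind: FederatedDeployment
metadata:
  name: test-deployment
  namespace: test-namespace
spec:
  template:                       # the Deployment stamped out on each cluster
    metadata:
      labels:
        app: nginx
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx
  placement:                      # which clusters receive the object
    clusters:
    - name: gondor
    - name: rohan
  overrides:                      # per-cluster patches over the template
  - clusterName: gondor
    clusterOverrides:
    - path: "/spec/replicas"      # JSON-pointer path (see jsonpatch.com)
      value: 4
```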

Schedule: defines how an application is distributed among clusters; at present this mainly covers ReplicaSets and Deployments. Through a schedule you can set the minimum and maximum number of replicas a workload may have in each cluster. This turns what v1 did through annotations into standalone API objects.
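
A sketch of this API, following the ReplicaSchedulingPreference example in the kubefed user guide (names and replica counts are placeholders):

```yaml
apiVersion: scheduling.kubefed.io/v1alpha1
kind: ReplicaSchedulingPreference
metadata:
  name: test-deployment            # matches the FederatedDeployment's name
  namespace: test-namespace
spec:
  targetKind: FederatedDeployment
  totalReplicas: 9                 # replicas to spread across the clusters
  clusters:
    gondor:
      minReplicas: 4
      maxReplicas: 6
      weight: 1
    rohan:
      minReplicas: 2
      maxReplicas: 8
      weight: 2
```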

MultiClusterDNS: implements service discovery across clusters, which is considerably more complex than in a single cluster. Kubefed handles it with ServiceDNSRecord, IngressDNSRecord, and DNSEndpoint objects, and it has to be used together with a DNS service.
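
A sketch of the objects involved, based on kubefed's multiclusterdns user guide (the domain and object names are placeholders, and an external DNS integration is assumed to turn the generated endpoints into real records):

```yaml
apiVersion: multiclusterdns.kubefed.io/v1alpha1
kind: Domain
metadata:
  name: test-domain
  namespace: kube-federation-system   # kubefed's system namespace
domain: example.com                   # zone that federated services resolve under
---
apiVersion: multiclusterdns.kubefed.io/v1alpha1
kind: ServiceDNSRecord
metadata:
  name: test-service                  # matches the target Service's name
  namespace: test-namespace
spec:
  domainRef: test-domain              # references the Domain above
  recordTTL: 300
```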

Overall, kubefed solves many of v1's problems. CRDs guarantee the extensibility of federated resources to a great extent: essentially all Kubernetes resources, including user-defined CRDs, can be deployed across multiple clusters.

Kubefed also has some problems worth noting:

  • Single point of control: the whole control plane is implemented as a single kubefed controller, which is a potential single point; this is also being discussed in the community. Kubefed currently uses push reconciliation: when a federated resource is created, the corresponding controller on the control plane pushes the resource object to each target cluster, and how the resource is handled from then on is the member cluster's own business, unrelated to the control plane. Consequently, when the kubefed control plane becomes unavailable, existing workloads are not affected.

  • Maturity: the kubefed community is not as active as the Kubernetes community, release cycles are long, and many features are still in beta.

  • Over-abstraction: kubefed uses type configurations to define the resources it manages, and different type configurations differ only in their templates. The advantage is that the processing logic can be unified, which makes implementation fast; the controllers corresponding to type configuration resources in kubefed are modular. The disadvantage is equally obvious: no specialized behavior can be provided for particular types. For example, for kubefed a FederatedDeployment object only needs to generate the corresponding Deployment from the template and overrides and create it on the clusters specified by the placement; whether that Deployment actually produces Pods on the cluster, and whether their status is healthy, is not reflected in the FederatedDeployment and can only be checked on the cluster itself. The community is aware of this and is actively working on it; there are corresponding proposals.

KubeSphere multi-cluster

Federation, described above, is the community's solution to multi-cluster deployment: federated resources make deploying across clusters possible. But for many enterprise users, federated deployment across clusters is not the pressing need; what they need even more is to manage the resources of multiple clusters in one place.

With KubeSphere, we can manage each cluster's resources independently, with support for features such as multi-dimensional queries and event notification.

[Figure: KubeSphere workflow]

Permission management is built on kubefed, RBAC, and Open Policy Agent. The multi-tenant design is mainly intended to let business departments, developers, and operations staff isolate and manage resources on demand in a unified management panel.


Overall architecture

[Figure: KubeSphere multi-cluster architecture]

The overall architecture of KubeSphere multi-cluster is shown in the figure. The cluster hosting the multi-cluster control plane is called the host cluster, and the clusters it manages are called member clusters; each is, in essence, a Kubernetes cluster with KubeSphere installed. The host cluster must be able to access the member clusters' kube-apiservers, while network connectivity between member clusters is not required. The host cluster is independent of the member clusters it manages, and a member cluster does not know of the host cluster's existence. The advantage is that when the control plane fails, the member clusters are unaffected and the workloads already deployed on them keep running normally.

The host cluster also serves as the API entry point and forwards resource requests to the member clusters. The purpose is not only convenient aggregation but also unified authentication and authorization.

Authentication

As the architecture diagram shows, the host cluster is responsible for synchronizing identity and permission information between clusters, which is done through kubefed's federated resources. FederatedUser / FederatedRole / FederatedRoleBinding objects are created on the host cluster, and kubefed pushes the corresponding User / Role / RoleBinding objects to the member clusters. Changes involving permissions are applied only on the host cluster and then synchronized to the member clusters. The purpose is to keep each member cluster self-contained: with identity and permission data stored on the member cluster itself, the cluster can authenticate and authorize independently, without depending on the host cluster. In the KubeSphere multi-cluster architecture, the host cluster plays the role of a coordinator of resources rather than a dictator, delegating power to the member clusters as much as possible.
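
To illustrate the mechanism (this is a generic kubefed sketch, not KubeSphere's exact object; KubeSphere's federated IAM types may use different group and kind names), a ClusterRoleBinding pushed to every member cluster could look like this:

```yaml
apiVersion: types.kubefed.io/v1beta1
kind: FederatedClusterRoleBinding
metadata:
  name: anna-global-admin
spec:
  template:                           # the ClusterRoleBinding to push
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: User
      name: anna
  placement:
    clusterSelector:
      matchLabels: {}                 # empty selector matches all member clusters
```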

Cluster connectivity

In KubeSphere multi-cluster, only the host cluster needs access to the member clusters' Kubernetes API servers; there is no connectivity requirement at the cluster network level. KubeSphere provides two ways to connect host and member clusters.

Direct connection: if the member cluster's kube-apiserver address is reachable from any node of the host cluster, this direct method can be used; the member cluster only needs to provide its kubeconfig. It applies to most managed Kubernetes services on public clouds, and to cases where the host and member clusters are on the same network.

Proxy connection: if the member cluster sits on a private network and its kube-apiserver address cannot be exposed, KubeSphere provides a proxy called tower. Concretely, a proxy service runs on the host cluster; when a new cluster is to be joined, the host cluster generates all the credentials needed for joining. An agent running on the member cluster connects to the host cluster's proxy service, and once the connection succeeds a reverse proxy tunnel is established. Because the member cluster's apiserver is then reached at a different, proxied address, a cluster proxy is generated for it. The advantage is that the underlying details are hidden: whether the connection is direct or proxied, the control plane simply sees a kubeconfig it can use directly.
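
For reference, a hedged sketch of how a member cluster might be declared to KubeSphere; the Cluster object below follows the shape described in KubeSphere's documentation, but the exact field names should be verified against your KubeSphere version:

```yaml
apiVersion: cluster.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: rohan
spec:
  connection:
    type: direct          # or "proxy" for the tower-based reverse tunnel
    kubeconfig: ""        # base64-encoded kubeconfig, used for direct connections
  joinFederation: true    # register the cluster with the kubefed control plane
```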

[Figure: cluster tunnel]

API forwarding

[Figure: API forwarding]

In the KubeSphere multi-cluster architecture, the host cluster is responsible for the cluster entry point: all user API requests are sent to the host cluster, which then decides where to dispatch each one. To stay as compatible as possible with the earlier API, in a multi-cluster environment a request whose path begins with /apis/clusters/{cluster} is forwarded to the {cluster} cluster with the /clusters/{cluster} part removed. The receiving cluster then sees no difference between this request and any other, and no extra work is needed. For instance:

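An illustrative request (the deployments resource path here is hypothetical) addressed to the cluster named rohan:

```
GET /apis/clusters/rohan/apps/v1/namespaces/default/deployments
```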

It will be forwarded to the cluster named rohan, and the request will be rewritten as follows:

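That is, the same request with /clusters/rohan stripped from the path (again illustrative):

```
GET /apis/apps/v1/namespaces/default/deployments
```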

Summary

The multi-cluster problem is far more complex than it appears. As the community's Federation effort shows, even after two versions there is still no official GA release. As the classic saying in software goes, there is no silver bullet: multi-cluster tools such as kubefed and KubeSphere cannot solve every multi-cluster problem, and you still need to choose what suits your specific business scenario. I believe that, with time, these tools will mature and cover more use scenarios.

Q&A

Q: Is the multi-cluster solution aimed at high availability or disaster recovery? If high availability (which involves cross-cluster calls), how should microservices, for example Spring Cloud services, be adapted?

A: Multi-cluster can serve both high availability and disaster recovery. The community's Federation approach is a common one and leans toward disaster recovery. Generally speaking, high availability has to be designed together with the specific business. Federation leans toward multi-cluster deployment; synchronizing the underlying data to support HA is something the business itself must handle.

Q: Is the multi-cluster management interface self-developed? If so, can you share the general development steps and screenshots of the interface in action?

A: Yes, it's self-developed. You can get a feel for the interface from the following videos:

  • https://www.bilibili.com/video/BV1Np4y1S7Lu/
  • https://www.bilibili.com/video/BV1SC4y187m9/
  • https://www.bilibili.com/video/BV1WT4y177LV/

Q: Did your business run into network communication problems when running on Kubernetes? Could you share the problems and the general troubleshooting steps? Thank you.

A: There are too many network problems to count. Take a look at this; it covers most network-problem scenarios. As for the really difficult ones, they are rarely encountered.

Q: How do you deal with HPA at the federation level?

A: KubeSphere multi-cluster does not yet cover multi-cluster HPA, which would require some changes to the underlying monitoring facilities; it is on the roadmap.

Q: We previously tried to use Federation to solve multi-cluster service discovery, but in the end gave up and didn't dare use it. Can you share your solution for multi-cluster service discovery?

A: Indeed, kubefed's service discovery is not as easy to use as single-cluster service discovery. We are actively pushing this forward together with the community. When you run into problems, do raise issues with the community: it helps the community understand user scenarios and is a contribution to open source :)

Q: When KubeSphere manages multiple clouds, how does it bridge public and private clouds? Also, will the number of Pod replicas in different clusters be adjusted dynamically, and on what basis?

A: The clusters KubeSphere manages are Kubernetes clusters with KubeSphere installed; Kubernetes itself already smooths over the heterogeneity between public and private clouds, so the only thing left to solve is the network. Today's sharing covered how KubeSphere multi-cluster connects to member clusters; one of the two methods is suited to private clouds, and you can look into it.

Q: What is the solution for HTTP ingress across multiple clusters? In a single cluster we use ingress-nginx → Service. How should this be handled with multiple clusters?

A: If you are using kubefed for multi-cluster, you can look into MultiClusterDNS, which supports multi-cluster ingress. If not, i.e. you simply have multiple independent Kubernetes clusters, it has to be configured manually.

This article is based on the KubeSphere team's sharing in the DockOne community; you can check the original post.

Related links:

  • http://jsonpatch.com/
  • https://github.com/kubernetes-sigs/kubefed/issues/636
  • https://github.com/kubernetes-sigs/kubefed/blob/master/pkg/controller/federatedtypeconfig/controller.go
  • https://github.com/kubernetes-sigs/kubefed/pull/1237
  • https://github.com/kubesphere
  • https://github.com/kubesphere/tower
  • https://kubesphere.io/

About KubeSphere

KubeSphere is a container hybrid-cloud platform built on top of Kubernetes. It provides full-stack, automated IT operations capabilities and simplifies enterprise DevOps workflows.

KubeSphere has been adopted by Aqara Smart Home, Benben Life, Sina, PICC Life Insurance, Huaxia Bank, SPD Silicon Valley Bank, Sichuan Airlines, Sinopharm Group, WeBank, Zijin Insurance, Radore, ZaloPay, and thousands of other enterprises at home and abroad. KubeSphere provides an operations-friendly, wizard-style interface and rich enterprise-grade features, including multi-cloud and multi-cluster management, Kubernetes resource management, DevOps (CI/CD), application lifecycle management, service mesh, multi-tenant management, monitoring and logging, alerting and notification, storage and network management, and GPU support, helping enterprises quickly build a powerful, feature-rich container cloud platform.

KubeSphere website: https://kubesphere.io/
KubeSphere GitHub: https://github.com/kubesphere/kubesphere

KubeSphere WeChat official account
