Kubernetes v1.19.3 kubeadm deployment notes
This article builds on my earlier blog post "002. Using kubeadm to install Kubernetes 1.17.0" (https://www.cnblogs.com/zyxnh…), on "Kubernetes: The Definitive Guide", and on other materials, recording and summarizing the deployment after the fact.
These notes are divided into three parts: (1) this part records the preparation of the infrastructure environment, the deployment of the master, and the deployment of the flannel network plug-in; (2) the middle part records the node deployment process and some necessary containers; (3) the last part introduces some monitoring and DevOps related content.
One. Preparation in advance
1. Infrastructure level:
I initially chose CentOS 8.2, but Alibaba Cloud's mirror repositories had not yet been adapted for version 8, so I switched to CentOS 7 2003, that is, CentOS Linux release 7.8.2003.
The cluster consists of one master and two nodes.
[root@master ~]# cat /etc/redhat-release
CentOS Linux release 7.8.2003 (Core)
2. Necessary software:
Everything is installed via yum, without pinning versions. kubeadm, kubectl, and kubelet are v1.19.3, and Docker is 1.13.1. Note that the cluster is created entirely through kubeadm; this command is the main thread running through the whole build. Apart from kubeadm, kubelet, kubectl, and Docker themselves, everything in the cluster runs as pods.
[root@master ~]# cat /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
[root@master ~]# yum clean all
[root@master ~]# yum install -y kubelet kubeadm kubectl docker
3. Supporting software:
Because my system is a minimal install, I need some other supporting software, which can also be installed with yum: bash-completion, strace, vim, wget, chrony, ntpdate, net-tools, etc.
[root@master ~]# yum install -y bash-completion strace vim wget chrony ntpdate net-tools
4. Network situation:
All hosts can reach each other and the public Internet. The virtualization platform is VirtualBox. Each VM reaches the external network through the NIC attached as Adapter 1 (NAT), while traffic between my laptop and the VMs, and among the VMs themselves, goes through enp0s8, attached as Adapter 2 (Host-only, vboxnet0).
All hosts should disable SELinux and stop the firewalld service.
[root@master ~]# getenforce
Disabled
[root@master ~]# systemctl status firewalld.service
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
6. Address forwarding:
kube-proxy relies on IPVS as the service proxy, so the IPVS-related kernel modules must be enabled on all hosts.
[root@master ~]# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- br_netfilter
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
[root@master ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules
[root@master ~]# bash /etc/sysconfig/modules/ipvs.modules
[root@master ~]# lsmod | grep -E "ip_vs|nf_conntrack_ipv4"
The results are as follows
[root@master ~]# lsmod | grep -E "ip_vs|nf_conntrack_ipv4"
nf_conntrack_ipv4      15053  10
nf_defrag_ipv4         12729  1 nf_conntrack_ipv4
ip_vs_sh               12688  0
ip_vs_wrr              12697  0
ip_vs_rr               12600  0
ip_vs                 145497  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          139264  7 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4
libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack
7. Swap partition:
All hosts should turn off the swap partition: by default the kubelet refuses to run with swap enabled. According to the material I found, the main consideration is performance, since swap is slow.
[root@master ~]# swapoff -a
To disable it permanently, comment out the swap line in /etc/fstab (a reboot is required to take effect):
[root@master ~]# grep swap /etc/fstab
#/dev/mapper/centos-swap swap swap defaults 0 0
8. Image dependency:
The following command shows which images the current version of kubeadm depends on. Pull them down in advance. The master needs everything in the list below; a node only needs kube-proxy and pause.
[root@master ~]# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.19.3
k8s.gcr.io/kube-controller-manager:v1.19.3
k8s.gcr.io/kube-scheduler:v1.19.3
k8s.gcr.io/kube-proxy:v1.19.3
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
9. Later improvement:
Later on, you can write these initialization steps into a shell script, wrapping each step in a function, and run it during initialization.
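Such a script might look like the following sketch. It is a hypothetical consolidation of the steps above, not the script actually used; by default it runs in dry-run mode (DRY_RUN=1) and only prints each command, since the real run requires root. Set DRY_RUN=0 to apply for real.

```shell
#!/bin/bash
# Hypothetical consolidation of the preparation steps into functions.
# DRY_RUN=1 (the default) prints each command instead of executing it.
DRY_RUN="${DRY_RUN:-1}"

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "+ $*"        # dry run: show the command only
  else
    "$@"
  fi
}

disable_swap() {
  run swapoff -a
  run sed -i '/ swap / s/^/#/' /etc/fstab   # permanent: comment out the swap line
}

disable_firewall_selinux() {
  run systemctl disable --now firewalld
  run setenforce 0
}

load_ipvs_modules() {
  for m in br_netfilter ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do
    run modprobe -- "$m"
  done
}

disable_swap
disable_firewall_selinux
load_ipvs_modules
```

Each preparation step becomes one function, so a failed step can be re-run in isolation.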
10. Special attention:
All of the steps above must be run on every machine in the cluster (the master and all nodes). Taking this further, a system prepared with the nine steps above could be packaged as an image, effectively an operating-system image baseline for k8s v1.19.3, which a private or public cloud could use to create more k8s clusters.
Two. Deploy the master
1. Initialization method and key points:
Log in to the master to execute this command. First you need to generate the relevant configuration file; if you already have one, you can run directly against it (I recommend not running it at this point; see step 4 for the actual run).
[root@master ~]# kubeadm init --config kubeadm-config.yml
You can also initialize by specifying the relevant parameters directly:
[root@master ~]# kubeadm init --kubernetes-version=v1.19.3 --pod-network-cidr=10.244.0.0/16
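The kubeadm-config.yml referenced throughout is never shown in these notes. A minimal sketch of what it might contain for this cluster follows; the pod subnet and API server address are taken from elsewhere in the article, while the remaining fields are assumptions based on the kubeadm v1beta2 config format:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.19.3
controlPlaneEndpoint: "192.168.56.99:6443"
networking:
  podSubnet: 10.244.0.0/16      # must match flannel's Network setting
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs                      # matches the IPVS modules loaded earlier
```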
At this point I suggest opening several more terminals, all connected to the master. If the command hangs in the first terminal, you can inspect what it is doing from another terminal with strace -p $PID. If anything times out, there is definitely a problem.
2. Preflight checks:
If initialization fails repeatedly or prints nothing, it is worth running the preflight phase on its own:
[root@master ~]# kubeadm init phase preflight
Then fix problems according to the printed result, for example Docker not installed, the etcd data path not empty, and so on.
3. Preparing images in advance:
The domestic (Chinese) network situation is a very real problem. It is best to pull all the images first and retag them.
[root@master opt]# for i in kube-apiserver kube-controller-manager kube-scheduler kube-proxy; do echo $i && docker pull kubeimage/$i-amd64:v1.19.3; done
[root@master opt]# for i in etcd-amd64 pause; do echo $i && docker pull kubeimage/$i-amd64:latest; done
After pulling, retag the images: kubeadm is strict about image names, which must match the k8s.gcr.io names exactly.
[root@master opt]# docker tag cdef7632a242 k8s.gcr.io/kube-proxy:v1.19.3
[root@master opt]# docker tag 9b60aca1d818 k8s.gcr.io/kube-scheduler:v1.19.3
[root@master opt]# docker tag aaefbfa906bd k8s.gcr.io/kube-controller-manager:v1.19.3
[root@master opt]# docker tag bfe3a36ebd25 k8s.gcr.io/coredns:1.7.0
One more caveat: I could not find the etcd:3.4.13-0 image anywhere online for a long time, so I simply pulled a recent etcd image and rewrote the version tag by hand, again to satisfy kubeadm. There is no way around it; it is that strict.
[root@master opt]# docker tag k8s.gcr.io/etcd:v3.4.2 k8s.gcr.io/etcd:3.4.13-0
Afterwards you will find that the two images share the same image ID. You can delete the unused one, or leave it; it does not matter.
[root@master opt]# docker rmi docker.io/mirrorgooglecontainers/etcd-amd64
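The pull-and-retag dance above can also be scripted. A sketch follows, assuming the same kubeimage mirror namespace used in the pulls above; the DOCKER variable defaults to "echo docker" so the commands are printed rather than executed, letting you preview the loop on a machine without Docker:

```shell
#!/bin/bash
# Hypothetical mirror helper: pull each control-plane image from a mirror
# repository and retag it to the k8s.gcr.io name kubeadm expects.
# DOCKER defaults to "echo docker" (preview mode); set DOCKER=docker to run.
DOCKER="${DOCKER:-echo docker}"
MIRROR="kubeimage"       # assumed mirror namespace, as in the pulls above
VERSION="v1.19.3"

for img in kube-apiserver kube-controller-manager kube-scheduler kube-proxy; do
  $DOCKER pull "$MIRROR/$img-amd64:$VERSION"
  $DOCKER tag "$MIRROR/$img-amd64:$VERSION" "k8s.gcr.io/$img:$VERSION"
done
```

Retagging by name this way avoids copying image IDs by hand, which is easy to get wrong.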
4. Operation initialization:
With the images ready, things become very simple. Initialize directly:
[root@master ~]# kubeadm init --config kubeadm-config.yml
If all goes well, the screen output looks like this:
[root@master opt]# kubeadm init --config kubeadm-config.yml
W1102 22:29:15.980199    6337 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [192.168.56.99 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.56.99]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [192.168.56.99 localhost] and IPs [192.168.56.99 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [192.168.56.99 localhost] and IPs [192.168.56.99 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 12.503531 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node 192.168.56.99 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node 192.168.56.99 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.56.99:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:e0bd52dc0916972310f01a26c3e742aef11fe6088550749e7281b4edca795e7e
Save this output; it will be needed later, especially the last command, which joins nodes to the cluster.
5. Check nodes and pods
[root@master opt]# kubectl get nodes
NAME            STATUS     ROLES    AGE   VERSION
192.168.56.99   NotReady   master   30m   v1.19.3
As for the pods: because the DNS pods are scheduled onto nodes, and no nodes have been deployed yet, the coredns pods are stuck in Pending while the other pods are Running.
[root@master opt]# kubectl get pod --all-namespaces
NAMESPACE     NAME                                    READY   STATUS    RESTARTS   AGE
kube-system   coredns-f9fd979d6-dn7v4                 1/1     Pending   1          4d22h
kube-system   coredns-f9fd979d6-mh2pn                 1/1     Pending   1          4d22h
kube-system   etcd-192.168.56.99                      1/1     Running   1          4d22h
kube-system   kube-apiserver-192.168.56.99            1/1     Running   1          4d22h
kube-system   kube-controller-manager-192.168.56.99   1/1     Running   3          4d22h
kube-system   kube-proxy-fkbfn                        1/1     Running   1          4d21h
kube-system   kube-scheduler-192.168.56.99            1/1     Running   2          4d22h
6. Install the flannel network plug-in:
Honestly, I did not know much about flannel at first. Rebuilding k8s this time, I found flannel has the following characteristics:
(1) It is a virtual overlay network.
(2) It manages the containers' network addresses, ensuring that each address is unique and that containers can communicate with one another.
(3) During transmission it re-encapsulates each data packet and unpacks it on arrival; in other words, it preserves the integrity of the packet.
(4) It creates a new virtual NIC on the host, which communicates by bridging to the Docker NIC; in other words, its operation depends strongly on Docker.
(5) Etcd keeps the configuration seen by flannel on every node consistent; each flannel instance also watches etcd for network-related changes.
With these concepts in mind, create it from the YAML file.
[root@master opt]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@master opt]# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
Note that the Network setting in kube-flannel.yml must be consistent with the podSubnet setting in kubeadm-config.yml.
[root@master opt]# grep podSubnet /opt/kubeadm-config.yml
  podSubnet: 10.244.0.0/16
[root@master opt]# grep Network /opt/kube-flannel.yml
      hostNetwork: true
      "Network": "10.244.0.0/16",
      hostNetwork: true
7. The master configuration is complete.
[root@master opt]# kubectl get nodes
NAME            STATUS   ROLES    AGE     VERSION
192.168.56.99   Ready    master   4d22h   v1.19.3