K8S performance test (1)–kubemark introduction and manual cluster construction

Time: 2022-9-23

What is kubemark

Kubemark is a performance testing tool provided by the Kubernetes project that can simulate a large-scale K8S cluster with relatively few resources. Its architecture is shown in the figure below: an external K8S cluster (which must have worker nodes) plus a complete kubemark master control plane (single-node or multi-node), i.e. a second K8S cluster (the kubemark cluster) that has only master nodes and no worker nodes. We deploy hollow pods in the external cluster; these pods actively register with the master of the kubemark cluster and become hollow nodes (virtual nodes) there. We can then run performance tests against the kubemark cluster. Although the results deviate slightly from a real cluster, they are representative of real-cluster behavior.
(Figure: kubemark architecture)

A kubemark setup consists of two parts:
A real kubemark master control plane, which can be single-node or multi-node.
A set of hollow nodes registered in the kubemark cluster, each usually simulated by a pod in another k8s cluster; the pod IP is the IP of the corresponding hollow node in the kubemark cluster.

Generally speaking, two components on a kubernetes node communicate with the master: kubelet and kube-proxy. On a hollow node:
kube-proxy: replaced by HollowProxy, which simulates kube-proxy functionality as no-ops
kubelet: replaced by HollowKubelet, which simulates kubelet functionality as no-ops while still exposing the cadvisor port for heapster to scrape metrics
Hollow nodes therefore mimic the behavior of kubelet and kube-proxy on real nodes and respond correctly to the master components, but never actually create resources such as pods.

Summary: create hollow pods in the external kubernetes cluster, let them register automatically with the kubemark kubernetes cluster as fake nodes, and then run tests against the kubemark cluster to measure the performance of the k8s control plane (master).
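
To make the two-cluster layout concrete, here is a minimal sketch of how you would address each cluster from one workstation; the kubeconfig file names external.config and kubemark.config are hypothetical placeholders:

# nodes of the external cluster: real masters and workers
kubectl --kubeconfig=external.config get nodes

# nodes of the kubemark cluster: only masters at first, plus
# hollow nodes once the hollow pods have registered
kubectl --kubeconfig=kubemark.config get nodes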

kubemark project compilation and image building

The kubemark source code lives in the kubernetes project. Compiling the code and building the kubemark image is the first step in constructing a kubemark cluster.
Prepare an environment with Go 1.17 or later and Docker installed. If you are on a network inside mainland China, configure a goproxy and a Docker registry mirror yourself, then execute the following commands:
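
For example, one common way to set the goproxy (goproxy.cn is just one option; adjust to your environment):

go env -w GOPROXY=https://goproxy.cn,direct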

mkdir -p $GOPATH/src/k8s.io/
cd $GOPATH/src/k8s.io/
git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
git checkout v1.20.10 # check out the version matching your kubemark k8s cluster
./hack/build-go.sh cmd/kubemark/
cp $GOPATH/src/k8s.io/kubernetes/_output/bin/kubemark $GOPATH/src/k8s.io/kubernetes/cluster/images/kubemark/
cd $GOPATH/src/k8s.io/kubernetes/cluster/images/kubemark/
make build

The build produces the container image staging-k8s.gcr.io/kubemark, which you can see with the docker image ls command.
This image needs to be imported onto the worker nodes of the external k8s cluster later.
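
One way to distribute it, assuming Docker is the container runtime on the worker nodes, that the image was tagged v1.20.10 to match the YAML used later, and that 192.0.2.21 stands in for a worker's address:

# on the build machine: confirm the image exists, then export it
docker image ls | grep kubemark
docker save staging-k8s.gcr.io/kubemark:v1.20.10 -o kubemark.tar

# copy the archive to each external-cluster worker and load it there
scp kubemark.tar root@192.0.2.21:/tmp/
ssh root@192.0.2.21 docker load -i /tmp/kubemark.tar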

kubemark cluster construction practice

If you are building from scratch, you need two K8S clusters: the external k8s cluster, which needs both masters and workers, and the kubemark k8s cluster, which needs only masters. They can be built with kubespray or kubeadm, which is not covered in this article.
For convenience of demonstration, this test uses the container service of a domestic cloud vendor to create the two K8S clusters, configured as follows:
External k8s: 3-node Master (2-core 4G cloud host + 50G data disk) + 2-node Worker (4-core 8G cloud host + 50G data disk)
kubemark k8s: 3-node Master (2-core 4G cloud host + 50G data disk)
Import the previously built kubemark image onto the worker nodes of the external k8s cluster, then log in to any master node of the external k8s cluster and perform the following operations:
1. Copy the kubeconfig of the kubemark k8s cluster (/root/.kube/config on any kubemark k8s master node) to the current directory as a file named config, as sketched below
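
A minimal sketch of this step, where 192.0.2.10 is a hypothetical address of a kubemark k8s master node:

scp root@192.0.2.10:/root/.kube/config ./config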
2. Execute the following commands to create the namespace and secret in the external k8s cluster

kubectl create ns kubemark
kubectl create secret generic kubeconfig --type=Opaque --namespace=kubemark --from-file=kubelet.kubeconfig=config --from-file=kubeproxy.kubeconfig=config

3. Create the hollow-node ReplicationController with the following YAML

apiVersion: v1
kind: ReplicationController
metadata:
  name: hollow-node
  namespace: kubemark
  labels:
    name: hollow-node
spec:
  replicas: 3
  selector:
    name: hollow-node
  template:
    metadata:
      labels:
        name: hollow-node
    spec:
      initContainers:
      - name: init-inotify-limit
        image: busybox:1.32
        imagePullPolicy: IfNotPresent
        command: ['sysctl', '-w', 'fs.inotify.max_user_instances=1000']
        securityContext:
          privileged: true
      volumes:
      - name: kubeconfig-volume
        secret:
          secretName: kubeconfig
      - name: logs-volume
        hostPath:
          path: /var/log
      - name: no-serviceaccount-access-to-real-master
        emptyDir: {}
      containers:
      - name: hollow-kubelet
        image: staging-k8s.gcr.io/kubemark:v1.20.10
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 4194
        - containerPort: 10250
        - containerPort: 10255
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        command: [
          "/kubemark",
          "--morph=kubelet",
          "--name=$(NODE_NAME)",
          "--kubeconfig=/kubeconfig/kubelet.kubeconfig",
          "--log-file=/var/log/kubelet-$(NODE_NAME).log",
          "--logtostderr=false"
        ]
        volumeMounts:
        - name: kubeconfig-volume
          mountPath: /kubeconfig
          readOnly: true
        - name: logs-volume
          mountPath: /var/log
        # mount an emptyDir over the default service account path so the hollow
        # components cannot use the external cluster's service account token
        - name: no-serviceaccount-access-to-real-master
          mountPath: /var/run/secrets/kubernetes.io/serviceaccount
          readOnly: true
        resources:
          requests:
            cpu: 50m
            memory: 100M
        securityContext:
          privileged: true
      - name: hollow-proxy
        image: staging-k8s.gcr.io/kubemark:v1.20.10
        imagePullPolicy: IfNotPresent
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        command: [
          "/kubemark",
          "--morph=proxy",
          "--name=$(NODE_NAME)",
          "--kubeconfig=/kubeconfig/kubeproxy.kubeconfig",
          "--log-file=/var/log/kubeproxy-$(NODE_NAME).log",
          "--logtostderr=false"
        ]
        volumeMounts:
        - name: kubeconfig-volume
          mountPath: /kubeconfig
          readOnly: true
        - name: logs-volume
          mountPath: /var/log
        # mount an emptyDir over the default service account path so the hollow
        # components cannot use the external cluster's service account token
        - name: no-serviceaccount-access-to-real-master
          mountPath: /var/run/secrets/kubernetes.io/serviceaccount
          readOnly: true
        resources:
          requests:
            cpu: 50m
            memory: 100M
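
Assuming the YAML above is saved as hollow-node.yaml (the file name is arbitrary), apply it in the external k8s cluster and watch the pods come up:

kubectl apply -f hollow-node.yaml
kubectl -n kubemark get pods -o wide -w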

Wait for all hollow pods to reach the Running state.

Switch to the kubemark k8s cluster and confirm that the hollow pods have registered with it as hollow nodes:
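For example, from any kubemark k8s master node:

# hollow nodes appear alongside the real master nodes
kubectl get nodes -o wide

Each hollow node is named after its hollow pod, because NODE_NAME in the YAML above is taken from the pod's metadata.name.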

At this point, the kubemark cluster is up. To adjust the number of hollow nodes, scale the replicas of the hollow-node rc in the external k8s cluster.
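
For example, to simulate 10 nodes, run the following against the external k8s cluster:

kubectl -n kubemark scale rc hollow-node --replicas=10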

In a later article, I will use this cluster together with the kbench tool to explain how to run performance tests against the k8s control plane.