Use KubeEye to safeguard your Kubernetes cluster


Author: kaliarch (Xue Lei), product lead at a cloud MSP provider, experienced in enterprise-grade high-availability / high-concurrency architectures (including hybrid-cloud and geo-disaster-recovery designs), enterprise DevOps transformation and optimization, shell / Python / Go development, and Kubernetes, Docker, cloud native, and microservice architecture.


KubeEye is a tool for detecting security and configuration problems in Kubernetes. It uses OPA to audit the configuration of business applications deployed in a k8s cluster, and Node-Problem-Detector, deployed on the cluster nodes, for node-level detection. Besides the built-in rules that cover the most common scenarios in the industry, it also supports user-defined rules for cluster inspection.


KubeEye collects cluster diagnostic data by calling the Kubernetes API and matching rules against key fields in resources and container specs. See the architecture diagram for details.

Node detection requires an agent installed on the host of each node to be inspected.



  • KubeEye reviews your workload YAML specifications against industry best practices to help you keep your cluster stable.
  • KubeEye can find problems in your cluster control plane, including kube-apiserver / kube-controller-manager / etcd, etc.
  • KubeEye can help you detect various node problems, including memory / CPU / disk pressure, unexpected kernel error logs, etc.

Inspection items

Check item                   Description                                                            Level
PrivilegeEscalationAllowed   Privilege escalation is allowed                                        urgent
CanImpersonateUser           The Role / ClusterRole can impersonate other users                     warning
CanDeleteResources           The Role / ClusterRole can delete Kubernetes resources                 warning
CanModifyWorkloads           The Role / ClusterRole can modify Kubernetes resources                 warning
NoCPULimits                  No CPU limit is set for the resource                                   urgent
NoCPURequests                No CPU request is set for the resource                                 urgent
HighRiskCapabilities         High-risk capabilities are enabled, e.g. ALL / SYS_ADMIN / NET_ADMIN   urgent
HostIPCAllowed               Host IPC is enabled                                                    urgent
HostNetworkAllowed           Host network is enabled                                                urgent
HostPIDAllowed               Host PID is enabled                                                    urgent
HostPortAllowed              A host port is opened                                                  urgent
ImagePullPolicyNotAlways     The image pull policy is not Always                                    warning
ImageTagIsLatest             The image tag is latest                                                warning
ImageTagMiss                 The image has no tag                                                   urgent
InsecureCapabilities         Insecure capabilities are enabled, e.g. KILL / SYS_CHROOT / CHOWN      warning
NoLivenessProbe              No liveness probe is set                                               warning
NoMemoryLimits               No memory limit is set for the resource                                urgent
NoMemoryRequests             No memory request is set for the resource                              urgent
NoPriorityClassName          No scheduling priority is set for the resource                         notice
PrivilegedAllowed            The resource runs in privileged mode                                   urgent
NoReadinessProbe             No readiness probe is set                                              warning
NotReadOnlyRootFilesystem    The root filesystem is not set read-only                               warning
NotRunAsNonRoot              Processes are not prevented from running as root                       warning
CertificateExpiredPeriod     The API server certificate expires in less than 30 days                urgent
EventAudit                   Event check                                                            warning
NodeStatus                   Node status check                                                      warning
DockerStatus                 Docker status check                                                    warning
KubeletStatus                Kubelet status check                                                   warning


KubeEye itself is written in Go. The compiled binary can be installed directly, and the related components can then be installed through it.


Binary installation

tar -zxvf kubeeye-0.3.0-linux-amd64.tar.gz
mv kubeeye /usr/bin/
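After moving the binary into place, a quick sanity check confirms it is on the PATH (a sketch; the help text may differ between versions, and the guard keeps the script harmless if the install step was skipped):

```shell
# Report where the binary landed, or a notice if the PATH lookup fails
if command -v kubeeye >/dev/null 2>&1; then
  echo "kubeeye installed at: $(command -v kubeeye)"
  kubeeye --help
else
  echo "kubeeye not found in PATH"
fi
```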

Source code compilation and installation

git clone
cd kubeeye 
make installke

Install NPD

For detecting the cluster node hosts, KubeEye uses Node-Problem-Detector, which must be installed on the node hosts. KubeEye wraps the installation command so NPD can be installed in one step.

⚠ Note: this will install NPD on your cluster, which is only required if you want a detailed node report.

[root@node ~]# kubeeye install -e npd
kube-system 	 ConfigMap 	 node-problem-detector-config 	 created
kube-system 	 DaemonSet 	 node-problem-detector 	 created

This mainly creates the node-problem-detector-config ConfigMap and the node-problem-detector DaemonSet in the kube-system namespace.
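To confirm the two objects really exist, you can list them afterwards (a sketch assuming kubectl access to the cluster; the guard skips the calls on machines without kubectl):

```shell
NS="kube-system"
DS="node-problem-detector"
# Both objects are created by "kubeeye install -e npd"
if command -v kubectl >/dev/null 2>&1; then
  kubectl -n "$NS" get configmap "${DS}-config"
  kubectl -n "$NS" get daemonset "$DS"
fi
```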

Running kubeeye in a cluster

Besides one-off use as a CLI tool, KubeEye also ships an operator that can run inside the cluster for long-term, continuous inspection.

Deploy kubeeye in kubernetes

kubectl apply -f
kubectl apply -f

View kubeeye inspection results

$ kubectl get clusterinsight -o yaml

apiVersion: v1
items:
- apiVersion:
  kind: ClusterInsight
  metadata:
    name: clusterinsight-sample
    namespace: default
  spec:
    auditPeriod: 24h
  status:
    auditResults:
    - resourcesType: Node
      resultInfos:
      - namespace: ""
        resourceInfos:
        - items:
          - level: warning
            message: KubeletHasNoSufficientMemory
            reason: kubelet has no sufficient memory available
          - level: warning
            message: KubeletHasNoSufficientPID
            reason: kubelet has no sufficient PID available
          - level: warning
            message: KubeletHasDiskPressure
            reason: kubelet has disk pressure
          name: kubeeyeNode

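Instead of dumping the whole object, the findings can be narrowed down with jsonpath (a sketch; the object name follows the sample output above, and the guard skips the call when kubectl is unavailable):

```shell
# Print only the audit status of the sample ClusterInsight object
if command -v kubectl >/dev/null 2>&1; then
  kubectl get clusterinsight clusterinsight-sample \
    -o jsonpath='{.status}'
fi
```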

Command options

[root@node ~]# kubeeye -h
KubeEye finds various problems on Kubernetes cluster.

  ke [command]

Available Commands:
  audit       audit resources from the cluster
  completion  generate the autocompletion script for the specified shell
  help        Help about any command
  install     A brief description of your command
  uninstall   A brief description of your command

  -f, --config string         Specify the path of kubeconfig.
  -h, --help                  help for ke
      --kubeconfig string     Paths to a kubeconfig. Only required if out-of-cluster.
      --master string         (Deprecated: switch to --kubeconfig) The address of the Kubernetes API server. Overrides any value in kubeconfig. Only required if out-of-cluster.

As shown, kubeeye currently provides two main commands: install, which installs components such as NPD, and audit, which scans the configuration of the applications in the cluster.


[root@node ~]# kubeeye audit
KIND         NAMESPACE         NAME                                           MESSAGE
Deployment   dddd              jenkins-1644220286                             [NoCPULimits ImagePullPolicyNotAlways NoMemoryLimits NoPriorityClassName NotReadOnlyRootFilesystem NotRunAsNonRoot]
Deployment   jenkins           jenkins-1644220286                             [NoCPULimits ImagePullPolicyNotAlways NoMemoryLimits NoPriorityClassName NotReadOnlyRootFilesystem NotRunAsNonRoot]
Deployment   smartkm-api-k8s   velero                                         [ImageTagIsLatest NoLivenessProbe NoPriorityClassName NotReadOnlyRootFilesystem NoReadinessProbe NotRunAsNonRoot]
DaemonSet    smartkm-api-k8s   restic                                         [ImageTagIsLatest NoLivenessProbe NoPriorityClassName NotReadOnlyRootFilesystem NoReadinessProbe NotRunAsNonRoot]
Node                           minikube                                       [KernelHasNoDeadlock FilesystemIsNotReadOnly KubeletHasSufficientMemory KubeletHasNoDiskPressure KubeletHasSufficientPID]
Event        kube-system       node-problem-detector-dmsws.16d844532f662318   [Failed to pull image "": rpc error: code = Unknown desc = Error response from daemon: Get net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)]
Event        kube-system       node-problem-detector-dmsws.16d844532f66703e   [Error: ErrImagePull]
Event        kube-system       node-problem-detector-dmsws.16d84453351b8b19   [Error: ImagePullBackOff]

Add custom check rule

The predefined check rules can be viewed with the following command.

kubectl get cm -n kube-system node-problem-detector-config -oyaml

You can also create custom inspection rules to suit your own business needs.

  • Create OPA rule storage directory
mkdir opa
  • Add custom OPA rule file

Note: for OPA rule sets that check workloads, the package name must be kubeeye_workloads_rego; for rule sets that check RBAC, the package name must be kubeeye_RBAC_rego; for rule sets that check nodes, the package name must be kubeeye_nodes_rego.

  • The following rule checks the image registry address. Save it to a rule file named imageRegistryRule.rego:
package kubeeye_workloads_rego

deny[msg] {
    resource := input
    type := resource.Object.kind
    resourcename := resource.Object.metadata.name
    resourcenamespace := resource.Object.metadata.namespace
    workloadsType := {"Deployment","ReplicaSet","DaemonSet","StatefulSet","Job"}
    workloadsType[type]

    not workloadsImageRegistryRule(resource)

    msg := {
        "Name": sprintf("%v", [resourcename]),
        "Namespace": sprintf("%v", [resourcenamespace]),
        "Type": sprintf("%v", [type]),
        "Message": "ImageRegistryNotmyregistry"
    }
}

workloadsImageRegistryRule(resource) {
    regex.match("^myregistry.public.kubesphere/basic/.+", resource.Object.spec.template.spec.containers[_].image)
}
  • Run kubeeye with additional rules

Tip: kubeeye will read all files ending in .rego in the specified directory.

kubeeye audit -p ./opa
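Before handing the directory to kubeeye, the custom rule can be dry-run with the plain OPA CLI (a sketch: `opa` must be installed separately, and `input.json` below is a hypothetical minimal input mimicking the `.Object` wrapper the rule expects):

```shell
# A Deployment whose image does NOT match the required registry prefix,
# so the deny rule should fire for it
cat > input.json <<'EOF'
{"Object": {"kind": "Deployment",
            "metadata": {"name": "demo", "namespace": "default"},
            "spec": {"template": {"spec": {"containers": [{"image": "nginx:1.21"}]}}}}}
EOF
# Evaluate the deny rule directly; a non-empty result means the check fired
if command -v opa >/dev/null 2>&1; then
  opa eval -d ./opa/imageRegistryRule.rego -i input.json 'data.kubeeye_workloads_rego.deny'
fi
```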


  • NPD installation fails: k8s.gcr.io is used by default, so if the installation server cannot reach the public network, you can use my mirrored image instead: 1832990/node-problem-detector:v0.8.7.
  • KubeEye uses the host's $HOME/.kube/config file by default. If this k8s config file does not exist, it cannot run.
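For the first point, the mirror can be pulled and retagged ahead of time so the DaemonSet finds the image locally (a sketch assuming Docker is the container runtime; the upstream tag below is an assumption and must match whatever image the NPD DaemonSet spec actually references):

```shell
MIRROR="1832990/node-problem-detector:v0.8.7"
# Hypothetical upstream name; check the DaemonSet spec for the real one
UPSTREAM="k8s.gcr.io/node-problem-detector:v0.8.7"
if command -v docker >/dev/null 2>&1; then
  docker pull "$MIRROR"
  docker tag "$MIRROR" "$UPSTREAM"
fi
```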
