I’ll show you something! Lightest kubernetes distribution ever

Time: 2021-05-04


We all know that kubernetes is a container orchestration platform that can be used to manage our container clusters. But if we only want it for learning, kubernetes is a bit too heavy: many personal machines simply cannot run a complete cluster environment of three instances (one master, two agents). Although there are deployment methods on the Internet that use Vagrant and virtual machines, they are relatively complex to use and configure. K3s came into being to solve these problems.

Project introduction

First of all, we need to understand the applicable scenarios and functional features of the project!

Users who know or have used kubernetes have probably heard of Rancher, an open source enterprise kubernetes management platform that can install and manage kubernetes clusters very well. K3s, the lightweight kubernetes distribution, was also created and is maintained by the same company.

K3s is also a fully CNCF-certified kubernetes distribution, which means that the YAML we write for the full version of kubernetes will also work on a k3s cluster. Moreover, it fully implements all the API interfaces provided by kubernetes, so we can operate the cluster freely through the API. The purpose of the k3s project is to create an extremely lightweight kubernetes distribution, mainly applicable to the following scenarios:

  • Edge
  • IoT
  • CI
  • Development
  • ARM
  • Embedding K8s
  • Situations where a PhD in K8s clusterology is infeasible

K3s packages everything you need to install kubernetes into a single binary of only xx MB in size. In addition, to reduce the memory needed to run k8s, many unnecessary drivers are removed and replaced with add-ons. As a result, it needs very few resources to run, installation is very fast, and it can run on devices such as a Raspberry Pi, in a mode where master and agent run together.

  • Removed features
  • Legacy and non-default features
  • Alpha features
  • Built-in cloud provider plug-ins
  • Built-in storage drivers
  • Docker


  • Project features
  • Uses SQLite as the default data store instead of etcd, though etcd is still supported
  • Built-in local storage provider, service load balancer, etc.
  • All k8s control-plane components, such as the API server and scheduler, are packaged into one simplified binary that runs as a single process
  • Removes built-in plug-ins, such as cloud provider plug-ins and storage plug-ins
  • To reduce external dependencies, the operating system only needs a reasonably new kernel with cgroup support
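The single-process design also shows up on the command line: the same `k3s` binary starts the whole control plane, and the default SQLite datastore can be swapped out. A minimal sketch, assuming a root shell; the MySQL connection string is a made-up placeholder:

```shell
# Start a server with the default embedded SQLite datastore
# (cluster state lands under /var/lib/rancher/k3s/server/)
sudo k3s server

# Or point the same binary at an external datastore instead of SQLite,
# e.g. a hypothetical MySQL instance reachable at 10.0.0.5
sudo k3s server \
  --datastore-endpoint='mysql://k3s:secret@tcp(10.0.0.5:3306)/k3s'
```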


  • Shortcomings
  • High availability is impossible or difficult to achieve in some scenarios. So if you want to do a large cluster deployment, I suggest you choose k8s. If you are in a small deployment scenario such as edge computing, or just need some non-core clusters for development/testing, then k3s is the more cost-effective choice.
  • A single-master k3s uses a SQLite database by default to store data, which is very friendly for small deployments, but SQLite becomes the main pain point under heavy write load. However, changes to the kubernetes control plane mostly come from frequent deployment updates, pod scheduling and so on, so for a small development/test cluster the database will not be under much load.

Of course, if you want to learn k8s but do not want to struggle with its tedious installation and deployment, you can use k3s instead. K3s includes all the basic functions of k8s, and the features it removes are ones you will rarely need.

# This won't take long ...
$ curl -sfL https://get.k3s.io | sh -
# Check for Ready node, takes maybe 30 seconds
$ k3s kubectl get node

Project structure

The figure below is the diagram of the project structure provided on the official website!

The k3s installation package already contains the components of containerd, flannel and coredns. It is very convenient to install with one click, and there is no need to install additional components such as docker and flannel.

Architecture


Single-server Setup with an Embedded DB


High-Availability K3s Server with an External DB


Fixed Registration Address for Agent Nodes

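The two HA diagrams above can be sketched as commands. This is a hedged sketch, not an official recipe: the hostnames, the shared token and the PostgreSQL connection string are made-up placeholders, and `k3s.example.com` stands for a load balancer in front of the servers on port 6443 (the fixed registration address).

```shell
# On each of two or more server nodes: share one external datastore
# and one join token (placeholders, replace with your own values)
sudo k3s server \
  --token=SECRET \
  --datastore-endpoint='postgres://k3s:secret@db.example.com:5432/k3s'

# On each agent node: join through the fixed registration address,
# not through any individual server's IP
sudo k3s agent \
  --server https://k3s.example.com:6443 \
  --token SECRET
```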

Installation method

Installation, so simple!

Quick start => using the installation script

# Deploy a k3s single-node environment (all-in-one)
# The installation script registers k3s with systemd or openrc and runs it as a service
$ curl -sfL https://get.k3s.io | sh -
# After installation, the corresponding commands are available:
# kubectl, crictl, k3s-killall.sh, k3s-uninstall.sh
# Kubeconfig configuration file: /etc/rancher/k3s/k3s.yaml
$ sudo kubectl get nodes
# Add more nodes
# K3S_URL: the URL of the API server
# K3S_TOKEN: the token string used to register the node,
# found on the master node at /var/lib/rancher/k3s/server/node-token
$ curl -sfL https://get.k3s.io | K3S_URL=https://myserver:6443 K3S_TOKEN=XXX sh -
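The installation script also accepts other environment variables. A sketch of two common ones, assuming the version string shown is a valid release tag at install time:

```shell
# Pin a specific k3s version instead of the latest release
$ curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.20.6+k3s1" sh -

# Pass extra flags to the server through INSTALL_K3S_EXEC,
# e.g. disable the bundled traefik to bring your own ingress controller
$ curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik" sh -
```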

Manual installation => using the binary package

# Download the k3s binary package
https://github.com/rancher/k3s/releases/latest
# Run the master node service (kubeconfig at /etc/rancher/k3s/k3s.yaml)
$ sudo k3s server &
$ sudo k3s kubectl get nodes
# On another machine, join the node to the master
$ sudo k3s agent --server https://myserver:6443 --token ${NODE_TOKEN}

Usage

Use k3s command just like k8s!

After k3s is installed, a kubectl subcommand is built in, which we invoke as k3s kubectl; its function and usage are identical to the k8s kubectl command. For more convenient use, we can set an alias or create a symlink to use the command seamlessly.

# Create an alias
$ alias kubectl='k3s kubectl'
# Or create a symlink named kubectl pointing at the k3s binary
$ sudo ln -sf /usr/local/bin/k3s /usr/local/bin/kubectl
# Configure kubectl command completion
$ source <(kubectl completion bash)
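If you already have a standalone kubectl, you can also point it at the k3s kubeconfig instead of aliasing; and the alias only lasts for the current shell unless you persist it. A sketch assuming a bash login shell:

```shell
# Let a standalone kubectl talk to the k3s cluster
# (kubeconfig path from the installation section above)
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

# Persist the alias and completion across shell sessions
cat >> ~/.bashrc <<'EOF'
alias kubectl='k3s kubectl'
source <(kubectl completion bash)
EOF
```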

After the configuration is complete, you can use kubectl to operate the cluster. Running the following command shows the list of pods in the kube-system namespace. Notice that no apiserver, scheduler, kube-proxy or flannel pods are running, because those components are embedded in the k3s process. In addition, k3s deploys services such as the traefik ingress controller and metrics-server by default, so we do not need to install them ourselves.

# View the list of pods running in the kube-system namespace
$ kubectl get pod -n kube-system
NAME                                      READY   STATUS      RESTARTS   AGE
metrics-server-6d123c7b5-4qppl            1/1     Running     0          70m
local-path-provisioner-58f123bdfd-8l4hn   1/1     Running     0          70m
helm-install-traefik-pltbs                1/1     Running     0          70m
coredns-6c62348b64-b9qcl                  1/1     Running     0          70m
svclb-traefik-223g2                       2/2     Running     0          70m
traefik-7b81234c8-xk237                   1/1     Running     0          70m

K3s does not use docker as the container runtime by default, but the bundled containerd, which we can interact with through the crictl subcommand (a CRI client). Of course, we can also create an alias for seamless use of the command.

# Create an alias
$ alias docker='k3s crictl'
# Configure completion for the aliased docker command
$ source <(docker completion)
$ complete -F _cli_bash_autocomplete docker

In this way, we can use the docker command to view the containers running on the machine. Notice the extra fields in the output below, such as ATTEMPT and POD ID, which are specific to CRI and do not appear in the real docker command.

# View the running containers through the docker alias
$ docker ps
CONTAINER    IMAGE      CREATED    STATE      NAME              ATTEMPT    POD ID
d8a...5      aa7...1    1min       Running    traefik           0          799...c
1ec...f      897...f    1min       Running    lb-port-443       0          457...d
021...1      897...f    1min       Running    lb-port-80        0          407...d
089...0      c4d...b    1min       Running    coredns           0          423...d
ac0...0      9dd...1    1min       Running    metrics-server    0          f6f...6

After installing and configuring the service, we need to know the following (it's even better with k9s!):

  • Ingress
  • Because k3s has a built-in traefik component, you don't need to install an ingress controller separately; you can create an Ingress directly. Here 192.168.xxx.xxx is the IP of the master node. Since we have no DNS resolution, we can configure it statically in the /etc/hosts file and then access our services through the domain name.
  • Network
  • Because k3s has the flannel network plug-in built in, the VXLAN backend is used by default, and the default pod IP range is 10.42.0.0/16. Besides VXLAN, the built-in flannel also supports IPsec, host-gw and WireGuard. Of course, in addition to the default flannel, k3s also supports other CNIs, such as Canal and Calico.
  • Storage
  • K3s removed the built-in cloud provider and storage plug-ins of k8s and ships a local-path provisioner to provide storage instead. The built-in local-path storage can only be used on a single machine; it supports neither cross-host use nor storage high availability. This can be solved with external storage plug-ins, such as Longhorn, a cloud-native distributed block storage system.
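Since the bundled traefik is already watching Ingress resources, exposing a service only takes an Ingress object plus a hosts entry. A hedged sketch: the service name whoami, the domain whoami.example.com and the node IP 192.168.1.10 are made-up placeholders, and an existing Service on port 80 is assumed.

```shell
# Create an Ingress routed by the bundled traefik controller
$ kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami
spec:
  rules:
  - host: whoami.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: whoami
            port:
              number: 80
EOF

# Without real DNS, map the hostname to the master node's IP statically
$ echo '192.168.1.10  whoami.example.com' | sudo tee -a /etc/hosts
$ curl http://whoami.example.com
```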


Author: Escape
Link:https://www.escapelife.site/p…
