Earlier, we learned about using HPA resources on k8s. For a review, see: https://www.cnblogs.com/qiuhom-1874/p/14293237.html; Today, let's talk about helm, the package manager for k8s;
What is helm?
If we compare a k8s resource manifest to an RPM package on CentOS, then helm plays the role of yum. In short, helm is a package manager, much like yum: it makes it easy for us to deploy applications on k8s. When we need to deploy an application on k8s, helm can complete the deployment with a single command; with the helm tool, we do not even need to write the resource manifests ourselves. Helm simply renders the resource manifests required by the application through a template engine and then submits them to k8s for application, thereby deploying the application onto k8s. An application deployed on k8s this way is called a release; that is, once the template manifests have been rendered by the template engine and deployed to k8s, the result is called a release.
Where do the template files come from? Just as RPM packages come from a repository, the templates here also come from a repository. In short, a helm repository stores the packaged template manifests of various applications; such a package file is called a chart, which is why a helm repository is also called a chart repository. A chart package mainly contains a Chart.yaml file, a README.md, a templates directory, and a values.yaml file. Chart.yaml holds the metadata of the corresponding application; README.md describes how to use and deploy the chart; the templates directory stores the various resource template files. There is one important file in the templates directory, NOTES.txt, which is itself a template: its main job, after being rendered by the template engine, is to print information about the successful installation of the chart and tell the user how to use it. The values.yaml file stores the default values for the chart's templates; when the user does not specify otherwise, the values referenced inside the templates are the ones from values.yaml. Because the template manifests are stored in the chart, a user can customize a chart simply by supplying a custom values.yaml file.
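Putting the pieces together, an unpacked chart is just a directory with a conventional layout, for example:

```text
mychart/
├── Chart.yaml          # metadata about the application (name, version, ...)
├── README.md           # how to use and deploy the chart
├── values.yaml         # default values referenced by the templates
└── templates/          # resource template files
    ├── deployment.yaml
    ├── service.yaml
    └── NOTES.txt       # rendered and printed after a successful install
```

Running `helm create mychart` scaffolds exactly this kind of skeleton.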
Installing the Helm tool
Deploying helm 2 was a little troublesome. In the early days, helm2 consisted of two components: helm, the command-line client, and tiller, a pod running on k8s. Tiller was the server side: it accepted requests from the helm client, rendered the corresponding chart, and then contacted the apiserver to deploy it. The current version of helm is 3.0+. Helm3 simplifies the helm2 design: helm no longer relies on the tiller component and interacts with the apiserver directly to deploy the corresponding chart onto k8s. The prerequisite for using helm3 is that the host can reach the k8s apiserver normally and has the kubectl command available, i.e. the host must already be able to manage the k8s cluster with kubectl. The reason is that helm uses kubectl's authentication information to interact with the apiserver.
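Before installing helm3, it is worth confirming the prerequisite just described, i.e. that kubectl on this host can already talk to the cluster:

```shell
# helm3 reuses kubectl's kubeconfig for authentication,
# so both of these must succeed before helm can work
kubectl config current-context
kubectl get nodes
```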
1. Installation of helm3
Download binary package
[[email protected] ~]# mkdir helm
[[email protected] ~]# cd helm/
[[email protected] helm]# wget https://get.helm.sh/helm-v3.5.0-linux-amd64.tar.gz
--2021-01-20 21:10:33-- https://get.helm.sh/helm-v3.5.0-linux-amd64.tar.gz
Resolving get.helm.sh (get.helm.sh)... 152.195.19.97, 2606:2800:11f:1cb7:261b:1f9c:2074:3c
Connecting to get.helm.sh (get.helm.sh)|152.195.19.97|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 12327633 (12M) [application/x-tar]
Saving to: ‘helm-v3.5.0-linux-amd64.tar.gz’
100%[==================================================================================================================================>] 12,327,633 9.17MB/s in 1.3s
2021-01-20 21:10:35 (9.17 MB/s) - ‘helm-v3.5.0-linux-amd64.tar.gz’ saved [12327633/12327633]
[[email protected] helm]# ls
helm-v3.5.0-linux-amd64.tar.gz
[[email protected] helm]#
Unzip package
[[email protected] helm]# tar xf helm-v3.5.0-linux-amd64.tar.gz
[[email protected] helm]# ls
helm-v3.5.0-linux-amd64.tar.gz linux-amd64
[[email protected] helm]# cd linux-amd64/
[[email protected] linux-amd64]# ls
helm LICENSE README.md
[[email protected] linux-amd64]#
Copy the helm binary into a directory on the PATH
[[email protected] linux-amd64]# cp helm /usr/bin/
[[email protected] linux-amd64]# hel<Tab>
helm  help
[[email protected] linux-amd64]#
2. Use of Helm
View helm version
[[email protected] ~]# helm version
version.BuildInfo{Version:"v3.5.0", GitCommit:"32c22239423b3b4ba6706d450bd044baffdcf9e6", GitTreeState:"clean", GoVersion:"go1.15.6"}
[[email protected] ~]#
View helm help
[[email protected] ~]# helm -h
The Kubernetes package manager
Common actions for Helm:
- helm search: search for charts
- helm pull: download a chart to your local directory to view
- helm install: upload the chart to Kubernetes
- helm list: list releases of charts
Environment variables:
| Name | Description |
|------------------------------------|-----------------------------------------------------------------------------------|
| $HELM_CACHE_HOME | set an alternative location for storing cached files. |
| $HELM_CONFIG_HOME | set an alternative location for storing Helm configuration. |
| $HELM_DATA_HOME | set an alternative location for storing Helm data. |
| $HELM_DEBUG | indicate whether or not Helm is running in Debug mode |
| $HELM_DRIVER | set the backend storage driver. Values are: configmap, secret, memory, postgres |
| $HELM_DRIVER_SQL_CONNECTION_STRING | set the connection string the SQL storage driver should use. |
| $HELM_MAX_HISTORY | set the maximum number of helm release history. |
| $HELM_NAMESPACE | set the namespace used for the helm operations. |
| $HELM_NO_PLUGINS | disable plugins. Set HELM_NO_PLUGINS=1 to disable plugins. |
| $HELM_PLUGINS | set the path to the plugins directory |
| $HELM_REGISTRY_CONFIG | set the path to the registry config file. |
| $HELM_REPOSITORY_CACHE | set the path to the repository cache directory |
| $HELM_REPOSITORY_CONFIG | set the path to the repositories file. |
| $KUBECONFIG | set an alternative Kubernetes configuration file (default "~/.kube/config") |
| $HELM_KUBEAPISERVER | set the Kubernetes API Server Endpoint for authentication |
| $HELM_KUBECAFILE | set the Kubernetes certificate authority file. |
| $HELM_KUBEASGROUPS | set the Groups to use for impersonation using a comma-separated list. |
| $HELM_KUBEASUSER | set the Username to impersonate for the operation. |
| $HELM_KUBECONTEXT | set the name of the kubeconfig context. |
| $HELM_KUBETOKEN | set the Bearer KubeToken used for authentication. |
Helm stores cache, configuration, and data based on the following configuration order:
- If a HELM_*_HOME environment variable is set, it will be used
- Otherwise, on systems supporting the XDG base directory specification, the XDG variables will be used
- When no other location is set a default location will be used based on the operating system
By default, the default directories depend on the Operating System. The defaults are listed below:
| Operating System | Cache Path | Configuration Path | Data Path |
|------------------|---------------------------|--------------------------------|-------------------------|
| Linux | $HOME/.cache/helm | $HOME/.config/helm | $HOME/.local/share/helm |
| macOS | $HOME/Library/Caches/helm | $HOME/Library/Preferences/helm | $HOME/Library/helm |
| Windows | %TEMP%\helm | %APPDATA%\helm | %APPDATA%\helm |
Usage:
helm [command]
Available Commands:
completion generate autocompletion scripts for the specified shell
create create a new chart with the given name
dependency manage a chart's dependencies
env helm client environment information
get download extended information of a named release
help Help about any command
history fetch release history
install install a chart
lint examine a chart for possible issues
list list releases
package package a chart directory into a chart archive
plugin install, list, or uninstall Helm plugins
pull download a chart from a repository and (optionally) unpack it in local directory
repo add, list, remove, update, and index chart repositories
rollback roll back a release to a previous revision
search search for a keyword in charts
show show information of a chart
status display the status of the named release
template locally render templates
test run tests for a release
uninstall uninstall a release
upgrade upgrade a release
verify verify that a chart at the given path has been signed and is valid
version print the client version information
Flags:
--debug enable verbose output
-h, --help help for helm
--kube-apiserver string the address and the port for the Kubernetes API server
--kube-as-group stringArray group to impersonate for the operation, this flag can be repeated to specify multiple groups.
--kube-as-user string username to impersonate for the operation
--kube-ca-file string the certificate authority file for the Kubernetes API server connection
--kube-context string name of the kubeconfig context to use
--kube-token string bearer token used for authentication
--kubeconfig string path to the kubeconfig file
-n, --namespace string namespace scope for this request
--registry-config string path to the registry config file (default "/root/.config/helm/registry.json")
--repository-cache string path to the file containing cached repository indexes (default "/root/.cache/helm/repository")
--repository-config string path to the file containing repository names and URLs (default "/root/.config/helm/repositories.yaml")
Use "helm [command] --help" for more information about a command.
[[email protected] ~]#
List the repositories
[[email protected] ~]# helm repo -h
This command consists of multiple subcommands to interact with chart repositories.
It can be used to add, remove, list, and index chart repositories.
Usage:
helm repo [command]
Available Commands:
add add a chart repository
index generate an index file given a directory containing packaged charts
list list chart repositories
remove remove one or more chart repositories
update update information of available charts locally from chart repositories
Flags:
-h, --help help for repo
Global Flags:
--debug enable verbose output
--kube-apiserver string the address and the port for the Kubernetes API server
--kube-as-group stringArray group to impersonate for the operation, this flag can be repeated to specify multiple groups.
--kube-as-user string username to impersonate for the operation
--kube-ca-file string the certificate authority file for the Kubernetes API server connection
--kube-context string name of the kubeconfig context to use
--kube-token string bearer token used for authentication
--kubeconfig string path to the kubeconfig file
-n, --namespace string namespace scope for this request
--registry-config string path to the registry config file (default "/root/.config/helm/registry.json")
--repository-cache string path to the file containing cached repository indexes (default "/root/.cache/helm/repository")
--repository-config string path to the file containing repository names and URLs (default "/root/.config/helm/repositories.yaml")
Use "helm repo [command] --help" for more information about a command.
[[email protected] ~]# helm repo list
Error: no repositories to show
[[email protected] ~]#
Tip: no repositories have been added yet;
Add a repository
[[email protected] ~]# helm repo add stable https://charts.helm.sh/stable
"stable" has been added to your repositories
[[email protected] ~]# helm repo list
NAME URL
stable https://charts.helm.sh/stable
[[email protected] ~]#
Tip: to add a repository, you need connectivity to it. If your server cannot reach the repository directly, use a proxy: on the shell, set the HTTPS_PROXY environment variable to an available proxy address, e.g. HTTPS_PROXY="http://www.ik8s.io:10080". When using proxy environment variables, be careful to also list the addresses that should not be proxied; for example, local addresses can be excluded with NO_PROXY="127.0.0.0/8,192.168.0.0/24". Otherwise, our kubectl traffic would also be sent through the proxy address we configured;
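For reference, the two variables from the tip above look like this when set together (the proxy address is just the placeholder from the text, not a real proxy):

```shell
# Route HTTPS traffic through the proxy...
export HTTPS_PROXY="http://www.ik8s.io:10080"
# ...but never proxy loopback or the local cluster network,
# otherwise helm/kubectl traffic to the apiserver gets proxied too
export NO_PROXY="127.0.0.0/8,192.168.0.0/24"
```

Any helm or kubectl command run afterwards in the same shell will honor these settings.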
Search for charts
Tip: helm search repo lists all charts in the repositories that have been added;
Search for redis in the repository
[[email protected] ~]# helm search repo redis
NAME CHART VERSION APP VERSION DESCRIPTION
stable/prometheus-redis-exporter 3.5.1 1.3.4 DEPRECATED Prometheus exporter for Redis metrics
stable/redis 10.5.7 5.0.7 DEPRECATED Open source, advanced key-value stor...
stable/redis-ha 4.4.6 5.0.6 DEPRECATED - Highly available Kubernetes implem...
stable/sensu 0.2.5 0.28 DEPRECATED Sensu monitoring framework backed by...
[[email protected] ~]#
Install stable/redis
[[email protected] ~]# helm install redis-demo stable/redis
WARNING: This chart is deprecated
NAME: redis-demo
LAST DEPLOYED: Wed Jan 20 22:27:18 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
This Helm chart is deprecated
Given the `stable` deprecation timeline (https://github.com/helm/charts#deprecation-timeline), the Bitnami maintained Redis Helm chart is now located at bitnami/charts (https://github.com/bitnami/charts/).
The Bitnami repository is already included in the Hubs and we will continue providing the same cadence of updates, support, etc that we've been keeping here these years. Installation instructions are very similar, just adding the _bitnami_ repo and using it during the installation (`bitnami/` instead of `stable/`)
```bash
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm install my-release bitnami/<chart> # Helm 3
$ helm install --name my-release bitnami/<chart> # Helm 2
```
To update an existing _stable_ deployment with a chart hosted in the bitnami repository you can execute
```bash
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm upgrade my-release bitnami/<chart>
```
Issues and PRs related to the chart itself will be redirected to `bitnami/charts` GitHub repository. In the same way, we'll be happy to answer questions related to this migration process in this issue (https://github.com/helm/charts/issues/20969) created as a common place for discussion.
** Please be patient while the chart is being deployed **
Redis can be accessed via port 6379 on the following DNS names from within your cluster:
redis-demo-master.default.svc.cluster.local for read/write operations
redis-demo-slave.default.svc.cluster.local for read-only operations
To get your password run:
export REDIS_PASSWORD=$(kubectl get secret --namespace default redis-demo -o jsonpath="{.data.redis-password}" | base64 --decode)
To connect to your Redis server:
1. Run a Redis pod that you can use as a client:
kubectl run --namespace default redis-demo-client --rm --tty -i --restart='Never' \
--env REDIS_PASSWORD=$REDIS_PASSWORD \
--image docker.io/bitnami/redis:5.0.7-debian-10-r32 -- bash
2. Connect using the Redis CLI:
redis-cli -h redis-demo-master -a $REDIS_PASSWORD
redis-cli -h redis-demo-slave -a $REDIS_PASSWORD
To connect to your database from outside the cluster execute the following commands:
kubectl port-forward --namespace default svc/redis-demo-master 6379:6379 &
redis-cli -h 127.0.0.1 -p 6379 -a $REDIS_PASSWORD
[[email protected] ~]#
View releases
[[email protected] ~]# helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
redis-demo default 1 2021-01-20 22:27:18.635916075 +0800 CST deployed redis-10.5.7 5.0.7
[[email protected] ~]#
Verification: use kubectl to check whether the corresponding redis-demo release is running on the k8s cluster
[[email protected] ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
myapp-779867bcfc-57zw7 1/1 Running 1 2d7h
myapp-779867bcfc-657qr 1/1 Running 1 2d7h
podinfo-56874dc7f8-5rb9q 1/1 Running 1 2d2h
podinfo-56874dc7f8-t6jgn 1/1 Running 1 2d2h
[[email protected] ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 443/TCP 11d
myapp-svc NodePort 10.111.14.219 80:31154/TCP 2d7h
podinfo NodePort 10.111.10.211 9898:31198/TCP 2d2h
redis-demo-headless ClusterIP None 6379/TCP 18m
redis-demo-master ClusterIP 10.100.228.32 6379/TCP 18m
redis-demo-slave ClusterIP 10.109.46.121 6379/TCP 18m
[[email protected] ~]# kubectl get sts
NAME READY AGE
redis-demo-master 0/1 18m
redis-demo-slave 0/2 18m
[[email protected] ~]#
Tip: listing the pods with kubectl shows no corresponding pod running, yet the corresponding SVC and STS were created normally;
Find out why the pods were not created
[[email protected] ~]# kubectl describe sts/redis-demo-master|grep -A 10 Events
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedCreate 14m (x12 over 14m) statefulset-controller create Pod redis-demo-master-0 in StatefulSet redis-demo-master failed error: failed to create PVC redis-data-redis-demo-master-0: persistentvolumeclaims "redis-data-redis-demo-master-0" is forbidden: exceeded quota: quota-storage-demo, requested: requests.storage=8Gi, used: requests.storage=0, limited: requests.storage=5Gi
Warning FailedCreate 3m40s (x18 over 14m) statefulset-controller create Claim redis-data-redis-demo-master-0 for Pod redis-demo-master-0 in StatefulSet redis-demo-master failed error: persistentvolumeclaims "redis-data-redis-demo-master-0" is forbidden: exceeded quota: quota-storage-demo, requested: requests.storage=8Gi, used: requests.storage=0, limited: requests.storage=5Gi
[[email protected] ~]# kubectl describe sts/redis-demo-slave|grep -A 10 Events
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedCreate 14m (x12 over 14m) statefulset-controller create Pod redis-demo-slave-0 in StatefulSet redis-demo-slave failed error: failed to create PVC redis-data-redis-demo-slave-0: persistentvolumeclaims "redis-data-redis-demo-slave-0" is forbidden: exceeded quota: quota-storage-demo, requested: requests.storage=8Gi, used: requests.storage=0, limited: requests.storage=5Gi
Warning FailedCreate 3m41s (x18 over 14m) statefulset-controller create Claim redis-data-redis-demo-slave-0 for Pod redis-demo-slave-0 in StatefulSet redis-demo-slave failed error: persistentvolumeclaims "redis-data-redis-demo-slave-0" is forbidden: exceeded quota: quota-storage-demo, requested: requests.storage=8Gi, used: requests.storage=0, limited: requests.storage=5Gi
[[email protected] ~]#
Tip: the events show that PVC creation was forbidden because it would exceed the quota-storage-demo resource quota;
View resource quota admission control rules
[[email protected] ~]# kubectl get resourcequota
NAME AGE REQUEST LIMIT
quota-storage-demo 19d persistentvolumeclaims: 0/5, requests.ephemeral-storage: 0/1Gi, requests.storage: 0/5Gi limits.ephemeral-storage: 0/2Gi
[[email protected] ~]# kubectl describe resourcequota quota-storage-demo
Name: quota-storage-demo
Namespace: default
Resource Used Hard
-------- ---- ----
limits.ephemeral-storage 0 2Gi
persistentvolumeclaims 0 5
requests.ephemeral-storage 0 1Gi
requests.storage 0 5Gi
[[email protected] ~]#
Note: the ResourceQuota admission rule limits total storage requests (requests.storage) in the namespace to 5Gi, while the redis chart above requests 8Gi; that violates the admission rule, so PVC creation is rejected and, as a result, the pods cannot be created;
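The quota seen above was created in an earlier article; reconstructed from the describe output, its manifest would look roughly like this (a sketch, not the author's original file):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-storage-demo
  namespace: default
spec:
  hard:
    persistentvolumeclaims: "5"        # at most 5 PVCs in the namespace
    requests.storage: 5Gi              # total PVC storage requests capped at 5Gi
    requests.ephemeral-storage: 1Gi
    limits.ephemeral-storage: 2Gi
```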
Uninstall redis-demo
[[email protected] ~]# helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
redis-demo default 1 2021-01-20 22:27:18.635916075 +0800 CST deployed redis-10.5.7 5.0.7
[[email protected] ~]# helm uninstall redis-demo
release "redis-demo" uninstalled
[[email protected] ~]# helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
[[email protected] ~]#
Delete the ResourceQuota admission control rule
[[email protected] ~]# kubectl get resourcequota
NAME AGE REQUEST LIMIT
quota-storage-demo 19d persistentvolumeclaims: 0/5, requests.ephemeral-storage: 0/1Gi, requests.storage: 0/5Gi limits.ephemeral-storage: 0/2Gi
[[email protected] ~]# kubectl delete resourcequota/quota-storage-demo
resourcequota "quota-storage-demo" deleted
[[email protected] ~]# kubectl get resourcequota
No resources found in default namespace.
[[email protected] ~]#
Check the PVs: is there enough capacity?
[[email protected] ~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
nfs-pv-v1 5Gi RWO,ROX,RWX Retain Bound kube-system/alertmanager 3d22h
nfs-pv-v2 5Gi RWO,ROX,RWX Retain Bound kube-system/prometheus-data-prometheus-0 3d22h
nfs-pv-v3 5Gi RWO,ROX,RWX Retain Available 3d22h
[[email protected] ~]#
Note: there is one more PV still unused, but its capacity is only 5Gi, which is not enough for redis;
Create PV
[[email protected] ~]# cat pv-demo.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-v4
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes: ["ReadWriteOnce","ReadWriteMany","ReadOnlyMany"]
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
  - hard
  - nfsvers=4.1
  nfs:
    path: /data/v4
    server: 192.168.0.99
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-v5
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes: ["ReadWriteOnce","ReadWriteMany","ReadOnlyMany"]
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
  - hard
  - nfsvers=4.1
  nfs:
    path: /data/v5
    server: 192.168.0.99
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-v6
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes: ["ReadWriteOnce","ReadWriteMany","ReadOnlyMany"]
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
  - hard
  - nfsvers=4.1
  nfs:
    path: /data/v6
    server: 192.168.0.99
[[email protected] ~]# kubectl apply -f pv-demo.yaml
persistentvolume/nfs-pv-v4 created
persistentvolume/nfs-pv-v5 created
persistentvolume/nfs-pv-v6 created
[[email protected] ~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
nfs-pv-v1 5Gi RWO,ROX,RWX Retain Bound kube-system/alertmanager 3d22h
nfs-pv-v2 5Gi RWO,ROX,RWX Retain Bound kube-system/prometheus-data-prometheus-0 3d22h
nfs-pv-v3 5Gi RWO,ROX,RWX Retain Available 3d22h
nfs-pv-v4 10Gi RWO,ROX,RWX Retain Available 3s
nfs-pv-v5 10Gi RWO,ROX,RWX Retain Available 3s
nfs-pv-v6 10Gi RWO,ROX,RWX Retain Available 3s
[[email protected] ~]#
Reinstall redis
[[email protected] ~]# helm install redis-demo stable/redis
WARNING: This chart is deprecated
NAME: redis-demo
LAST DEPLOYED: Wed Jan 20 22:54:30 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
This Helm chart is deprecated
Given the `stable` deprecation timeline (https://github.com/helm/charts#deprecation-timeline), the Bitnami maintained Redis Helm chart is now located at bitnami/charts (https://github.com/bitnami/charts/).
The Bitnami repository is already included in the Hubs and we will continue providing the same cadence of updates, support, etc that we've been keeping here these years. Installation instructions are very similar, just adding the _bitnami_ repo and using it during the installation (`bitnami/` instead of `stable/`)
```bash
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm install my-release bitnami/<chart> # Helm 3
$ helm install --name my-release bitnami/<chart> # Helm 2
```
To update an existing _stable_ deployment with a chart hosted in the bitnami repository you can execute
```bash
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm upgrade my-release bitnami/<chart>
```
Issues and PRs related to the chart itself will be redirected to `bitnami/charts` GitHub repository. In the same way, we'll be happy to answer questions related to this migration process in this issue (https://github.com/helm/charts/issues/20969) created as a common place for discussion.
** Please be patient while the chart is being deployed **
Redis can be accessed via port 6379 on the following DNS names from within your cluster:
redis-demo-master.default.svc.cluster.local for read/write operations
redis-demo-slave.default.svc.cluster.local for read-only operations
To get your password run:
export REDIS_PASSWORD=$(kubectl get secret --namespace default redis-demo -o jsonpath="{.data.redis-password}" | base64 --decode)
To connect to your Redis server:
1. Run a Redis pod that you can use as a client:
kubectl run --namespace default redis-demo-client --rm --tty -i --restart='Never' \
--env REDIS_PASSWORD=$REDIS_PASSWORD \
--image docker.io/bitnami/redis:5.0.7-debian-10-r32 -- bash
2. Connect using the Redis CLI:
redis-cli -h redis-demo-master -a $REDIS_PASSWORD
redis-cli -h redis-demo-slave -a $REDIS_PASSWORD
To connect to your database from outside the cluster execute the following commands:
kubectl port-forward --namespace default svc/redis-demo-master 6379:6379 &
redis-cli -h 127.0.0.1 -p 6379 -a $REDIS_PASSWORD
[[email protected] ~]#
Use kubectl again to check whether the corresponding pods are running normally
[[email protected] ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
myapp-779867bcfc-57zw7 1/1 Running 1 2d7h
myapp-779867bcfc-657qr 1/1 Running 1 2d7h
podinfo-56874dc7f8-5rb9q 1/1 Running 1 2d2h
podinfo-56874dc7f8-t6jgn 1/1 Running 1 2d2h
redis-demo-master-0 0/1 CrashLoopBackOff 4 2m33s
redis-demo-slave-0 0/1 CrashLoopBackOff 4 2m33s
[[email protected] ~]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
redis-data-redis-demo-master-0 Bound nfs-pv-v4 10Gi RWO,ROX,RWX 2m39s
redis-data-redis-demo-slave-0 Bound nfs-pv-v6 10Gi RWO,ROX,RWX 2m39s
[[email protected] ~]#
Tip: the PVCs were created and bound automatically here, but the corresponding pods still cannot start normally;
View pod details
[[email protected] ~]# kubectl describe pod/redis-demo-master-0|grep -A 10 Events
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 6m53s default-scheduler Successfully assigned default/redis-demo-master-0 to node01.k8s.org
Normal Pulling 6m51s kubelet Pulling image "docker.io/bitnami/redis:5.0.7-debian-10-r32"
Normal Pulled 6m33s kubelet Successfully pulled image "docker.io/bitnami/redis:5.0.7-debian-10-r32" in 18.056248477s
Normal Started 5m47s (x4 over 6m33s) kubelet Started container redis-demo
Normal Created 5m1s (x5 over 6m33s) kubelet Created container redis-demo
Normal Pulled 5m1s (x4 over 6m32s) kubelet Container image "docker.io/bitnami/redis:5.0.7-debian-10-r32" already present on machine
Warning BackOff 100s (x28 over 6m31s) kubelet Back-off restarting failed container
[[email protected] ~]# kubectl describe pod/redis-demo-slave-0|grep -A 10 Events
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 6m58s (x2 over 6m58s) default-scheduler 0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.
Normal Scheduled 6m55s default-scheduler Successfully assigned default/redis-demo-slave-0 to node01.k8s.org
Normal Pulling 6m55s kubelet Pulling image "docker.io/bitnami/redis:5.0.7-debian-10-r32"
Normal Pulled 6m37s kubelet Successfully pulled image "docker.io/bitnami/redis:5.0.7-debian-10-r32" in 17.603521415s
Normal Created 5m12s (x5 over 6m37s) kubelet Created container redis-demo
Normal Started 5m12s (x5 over 6m37s) kubelet Started container redis-demo
Normal Pulled 5m12s (x4 over 6m36s) kubelet Container image "docker.io/bitnami/redis:5.0.7-debian-10-r32" already present on machine
Warning BackOff 106s (x27 over 6m35s) kubelet Back-off restarting failed container
[[email protected] ~]#
Tip: the pod details can be viewed here, but they give no clear error message; in short, the pods fail to run normally (presumably something related to how the image starts up). Still, the experiment above shows that even though the pods cannot run, helm successfully submitted the corresponding chart to k8s; helm itself did its job;
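When describe only shows Back-off events like this, the container logs are usually more informative; for a CrashLoopBackOff pod, the natural next step would be something along these lines (standard kubectl usage, run against the live cluster):

```shell
# Logs of the currently crashing container
kubectl logs redis-demo-master-0
# Logs of the previous attempt, if the container has already restarted
kubectl logs --previous redis-demo-master-0
```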
Uninstall redis-demo and try installing a different chart
Tip: searching for redis in the stable repository above showed that all redis charts there are deprecated;
Remove the repository and add a new one
[[email protected] ~]# helm repo list
NAME URL
stable https://charts.helm.sh/stable
[[email protected] ~]# helm repo remove stable
"stable" has been removed from your repositories
[[email protected] ~]# helm repo add bitnami https://charts.bitnami.com/bitnami
"bitnami" has been added to your repositories
[[email protected] ~]# helm repo list
NAME URL
bitnami https://charts.bitnami.com/bitnami
[[email protected] ~]#
Search for redis charts
[[email protected] ~]# helm search repo redis
NAME CHART VERSION APP VERSION DESCRIPTION
bitnami/redis 12.6.2 6.0.10 Open source, advanced key-value store. It is of...
bitnami/redis-cluster 4.2.6 6.0.10 Open source, advanced key-value store. It is of...
[[email protected] ~]#
Install bitnami/redis
[[email protected] ~]# helm install redis-demo bitnami/redis
NAME: redis-demo
LAST DEPLOYED: Thu Jan 21 01:58:18 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
** Please be patient while the chart is being deployed **
Redis can be accessed via port 6379 on the following DNS names from within your cluster:
redis-demo-master.default.svc.cluster.local for read/write operations
redis-demo-slave.default.svc.cluster.local for read-only operations
To get your password run:
export REDIS_PASSWORD=$(kubectl get secret --namespace default redis-demo -o jsonpath="{.data.redis-password}" | base64 --decode)
To connect to your Redis(TM) server:
1. Run a Redis(TM) pod that you can use as a client:
kubectl run --namespace default redis-demo-client --rm --tty -i --restart='Never' \
--env REDIS_PASSWORD=$REDIS_PASSWORD \
--image docker.io/bitnami/redis:6.0.10-debian-10-r1 -- bash
2. Connect using the Redis(TM) CLI:
redis-cli -h redis-demo-master -a $REDIS_PASSWORD
redis-cli -h redis-demo-slave -a $REDIS_PASSWORD
To connect to your database from outside the cluster execute the following commands:
kubectl port-forward --namespace default svc/redis-demo-master 6379:6379 &
redis-cli -h 127.0.0.1 -p 6379 -a $REDIS_PASSWORD
[[email protected] ~]#
Check the pod status
Tip: the logs report that the appendonly file cannot be opened (permission denied), indicating that the back-end storage we mounted is not writable;
Add write permission to the back-end storage
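The screenshots of this step did not survive the transfer; on the NFS server (192.168.0.99, per the PV manifests above), granting write access typically looks something like the following sketch. The exact paths come from the PVs that were bound, and the UID is an assumption: bitnami's redis image runs as a non-root user, commonly UID 1001.

```shell
# On the NFS server: make the exported directories writable by the
# container user, either by chowning to its UID...
chown -R 1001:1001 /data/v4 /data/v6
# ...or, more bluntly, by opening write permission to others
chmod -R o+w /data/v4 /data/v6
```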
Tip: even after adding write permission, the corresponding pods still fail to run normally; delete the pods and see whether the recreated pods run normally
[[email protected] ~]# kubectl delete pod --all
pod "redis-demo-master-0" deleted
pod "redis-demo-slave-0" deleted
[[email protected] ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
redis-demo-master-0 0/1 ContainerCreating 0 3s
redis-demo-slave-0 0/1 Running 0 3s
[[email protected] ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
redis-demo-master-0 0/1 Running 0 5s
redis-demo-slave-0 0/1 Running 0 5s
[[email protected] ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
redis-demo-master-0 1/1 Running 0 62s
redis-demo-slave-0 1/1 Running 0 62s
redis-demo-slave-1 0/1 CrashLoopBackOff 2 26s
[[email protected] ~]#
Tip: after deleting the pods here, the newly created pods run normally; however, one more slave fails to run, which again should be caused by missing write permission on its back-end storage;
Add write permission to the back-end storage again
Tip: once the corresponding directory is writable, the corresponding pod starts normally;
Inspect the redis master-slave replication cluster
Tip: the master node shows information for the two slave nodes;
Verification: write data on the master node and check whether the slave nodes synchronize it
Tip: data written on the master is synchronized to the slaves and can be read from them normally, indicating that the master-slave replication cluster works normally;
Update the repositories
[[email protected] ~]# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "bitnami" chart repository
Update Complete. ⎈Happy Helming!⎈
[[email protected] ~]#
Tip: it is recommended to update the repositories every time before deploying a new application;
Deploy an application with custom values
Tip: the --set option passes custom values into the corresponding chart, overriding the values in the corresponding template files; the command above sets the redis password to admin123.com and disables persistent storage on both master and slave (not recommended for production environments). Of course, --set is convenient for a few simple parameters; if the parameters are complex, it is better to put the overrides in a custom values.yaml file and pass it with the --values option;
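The command itself did not survive the transfer; based on the description and on the bitnami/redis chart of that era, it was along these lines (the parameter names `password`, `master.persistence.enabled`, and `slave.persistence.enabled` are assumptions about that chart version):

```shell
helm install redis-demo bitnami/redis \
  --set password=admin123.com \
  --set master.persistence.enabled=false \
  --set slave.persistence.enabled=false
```

For complex overrides, put the same values in a file and pass it instead, e.g. `helm install redis-demo bitnami/redis --values myvalues.yaml`.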