Use sealos to quickly build a function computing platform with mongo and minio on openebs, a must-have for slacking off

Time: 2022-8-17

While developing laf (https://github.com/lafjs/laf), the author relies on the mongo and minio components; this article introduces best practices for deploying them.

By the way, a word about laf: it is a function computing platform where writing code is as simple as writing a blog. Write the code, click publish, shut the laptop and leave. Docker, k8s, CI/CD... why should I, a business developer, care about any of that~ Laf is a framework forced into existence by business needs, and it turns a front-end developer into a full-stack developer in seconds.

Life is short, you need laf 🙂


Laf relies on mongo, minio, and ingress, and to make the whole thing more cloud native, we introduce openebs to manage storage.

| Best Partner


sealos never lets its users suffer. For laf's needs, all it takes is:

sealos run \
   -e openebs-basedir=/data -e mongo-replicaCount=3 \
   fanux/kubernetes:v1.23.5 \
   fanux/openebs:latest \
   fanux/mongo:latest \
   laf-js/laf:latest \
   -m 192.168.0.2 -n 192.168.0.3

And then? There is no "and then", that's it. How can you not love something like this? We only need two environment variables: one for the storage directory and one for the number of mongo replicas. We know what kind of simplicity users want, and the best part is keeping things simple for users without sacrificing functionality. That is the way of simplicity, and it is what sealos is most proud of.
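After the run completes, a quick sanity check that all the components came up (plain kubectl; the exact namespaces depend on the images used):

kubectl get nodes
kubectl get pods -A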

| A tutorial for when the workload isn't full

Let's take a look at the painful life you would have to live without sealos. Of course, the tutorial below is perfect to practice when your workload isn't full. I still recommend automating it with sealos and then using the document below to show your boss how much you have done. The boss will be very happy, say this guy is really capable, and you get to comfortably play Honor of Kings for the rest of the day...

| install openebs

kubectl apply -f https://openebs.github.io/charts/openebs-operator.yaml

openebs has many storage modes: block storage (cStor), local PV (local directory storage), temporary storage, and so on. Block storage is recommended for production environments; if the requirements are not that strict, local directory storage can be used, and temporary storage is only for testing.
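Before creating a StorageClass, you can check that the operator components are running (the operator YAML installs everything into the openebs namespace and also ships default StorageClasses):

kubectl get pods -n openebs
kubectl get sc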

Create a StorageClass:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-hostpath
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: hostpath
      - name: BasePath
        value: /var/local-hostpath # Host path storage dir
provisioner: openebs.io/local
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer

The BasePath here configures which directory you want to store the data in.
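Save the manifest and apply it (the filename here is just an example):

kubectl apply -f local-hostpath-sc.yaml
kubectl get sc local-hostpath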

use storage

Create a PVC. Note that this example uses the default openebs-hostpath StorageClass installed by the operator; the local-hostpath class created above is used later in this article for mongo and minio:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: local-hostpath-pvc
spec:
  storageClassName: openebs-hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5G

kubectl get pvc local-hostpath-pvc
NAME                 STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS       AGE
local-hostpath-pvc   Pending                                      openebs-hostpath   3m7s

The PVC stays Pending because the StorageClass uses WaitForFirstConsumer volume binding; it is bound only after a pod that uses it is scheduled. Use the PVC in a pod:

apiVersion: v1
kind: Pod
metadata:
  name: hello-local-hostpath-pod
spec:
  volumes:
  - name: local-storage
    persistentVolumeClaim:
      claimName: local-hostpath-pvc
  containers:
  - name: hello-container
    image: busybox
    command:
       - sh
       - -c
       - 'while true; do echo "`date` [`hostname`] Hello from OpenEBS Local PV." >> /mnt/store/greet.txt; sleep $(($RANDOM % 5 + 300)); done'
    volumeMounts:
    - mountPath: /mnt/store
      name: local-storage

kubectl apply -f local-hostpath-pod.yaml
kubectl exec hello-local-hostpath-pod -- cat /mnt/store/greet.txt
kubectl get pvc local-hostpath-pvc
NAME                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
local-hostpath-pvc   Bound    pvc-864a5ac8-dd3f-416b-9f4b-ffd7d285b425   5G         RWO            openebs-hostpath   28m

Check out the bound PV:
kubectl get pv pvc-864a5ac8-dd3f-416b-9f4b-ffd7d285b425 -o yaml

clean up

kubectl delete pod hello-local-hostpath-pod
kubectl delete pvc local-hostpath-pvc
kubectl delete sc local-hostpath

kubectl get pv

tracking data

Here is how to trace where the data is ultimately stored, so you can rest assured~

Get the pod's PVC name:

kubectl get pod hello-local-hostpath-pod-4 -oyaml | grep claimName
      claimName: local-hostpath-pvc-4

Get the PV name and node name from the PVC:

kubectl get pvc local-hostpath-pvc-4 -oyaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
...
    volume.kubernetes.io/selected-node: iz2ze0qiwmjj4p5rncuhhoz
...
  name: local-hostpath-pvc-4
...
  storageClassName: local-hostpath
  volumeName: pvc-056c7781-c9b3-46f6-aa6e-a3a2d72456d6
...

We got the node name: iz2ze0qiwmjj4p5rncuhhoz

The storageClass is: local-hostpath

View storageClass details:

kubectl get sc local-hostpath -oyaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    cas.openebs.io/config: |
      - name: StorageType
        value: hostpath
      - name: BasePath
        value: /data
    openebs.io/cas-type: local
...
provisioner: openebs.io/local
reclaimPolicy: Delete

So the data directory is /data.

The final location of the data is iz2ze0qiwmjj4p5rncuhhoz:/data/pvc-056c7781-c9b3-46f6-aa6e-a3a2d72456d6. SSH to the node to take a look:

kubectl get node -owide
NAME                      STATUS   ROLES                  AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION              CONTAINER-RUNTIME
iz2ze0qiwmjj4p5rncuhhoz   Ready    <none>                 29h   v1.22.0   172.17.83.145   <none>        CentOS Linux 7 (Core)   3.10.0-693.2.2.el7.x86_64   containerd://1.4.3

ssh root@172.17.83.145
cd /data/pvc-056c7781-c9b3-46f6-aa6e-a3a2d72456d6 && ls
greet.txt # rest assured
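The same lookup can be scripted with jsonpath (a convenience sketch; the PVC name and the /data BasePath are the ones used in this example):

PVC=local-hostpath-pvc-4
PV=$(kubectl get pvc $PVC -o jsonpath='{.spec.volumeName}')
NODE=$(kubectl get pvc $PVC -o jsonpath='{.metadata.annotations.volume\.kubernetes\.io/selected-node}')
echo "data lives at $NODE:/data/$PV"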

| mongo uses storage provided by openebs

git clone https://github.com/bitnami/charts

Configure the values. Here you need to set the number of replicas, and the number of nodePorts must match the replica count:

architecture=replicaset
replicaCount=3
externalAccess.enabled=true
externalAccess.service.type=NodePort
externalAccess.service.nodePorts[0]='31001'
externalAccess.service.nodePorts[1]='31002'
externalAccess.service.nodePorts[2]='31003'

Modify StorageClass:

storageClass: "local-hostpath"

cd bitnami/mongodb && helm install mongo-test .
NAME: mongo-test
LAST DEPLOYED: Tue Mar 29 16:18:08 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: mongodb
CHART VERSION: 11.1.3
APP VERSION: 4.4.13
** Please be patient while the chart is being deployed **

MongoDB&reg; can be accessed on the following DNS name(s) and ports from within your cluster:

    mongo-test-mongodb-0.mongo-test-mongodb-headless.default.svc.cluster.local:27017
    mongo-test-mongodb-1.mongo-test-mongodb-headless.default.svc.cluster.local:27017
    mongo-test-mongodb-2.mongo-test-mongodb-headless.default.svc.cluster.local:27017
    mongo-test-mongodb-3.mongo-test-mongodb-headless.default.svc.cluster.local:27017
To get the root password run:

    export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace default mongo-test-mongodb -o jsonpath="{.data.mongodb-root-password}" | base64 --decode)

To connect to your database, create a MongoDB&reg; client container:

    kubectl run --namespace default mongo-test-mongodb-client --rm --tty -i --restart='Never' --env="MONGODB_ROOT_PASSWORD=$MONGODB_ROOT_PASSWORD" --image docker.io/bitnami/mongodb:4.4.13-debian-10-r25 --command -- bash

Then, run the following command:
    mongo admin --host "mongo-test-mongodb-0.mongo-test-mongodb-headless.default.svc.cluster.local:27017,mongo-test-mongodb-1.mongo-test-mongodb-headless.default.svc.cluster.local:27017,mongo-test-mongodb-2.mongo-test-mongodb-headless.default.svc.cluster.local:27017,mongo-test-mongodb-3.mongo-test-mongodb-headless.default.svc.cluster.local:27017" --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD

To connect to your database nodes from outside, you need to add both primary and secondary nodes hostnames/IPs to your Mongo client. To obtain them, follow the instructions below:

    MongoDB&reg; nodes domain: you can reach MongoDB&reg; nodes on any of the K8s nodes external IPs.

        kubectl get nodes -o wide

    MongoDB&reg; nodes port: You will have a different node port for each MongoDB&reg; node. You can get the list of configured node ports using the command below:

        echo "$(kubectl get svc --namespace default -l "app.kubernetes.io/name=mongodb,app.kubernetes.io/instance=mongo-test,app.kubernetes.io/component=mongodb,pod" -o jsonpath='{.items[*].spec.ports[0].nodePort}' | tr ' ' '\n')"

View pods:

kubectl get pod
NAME                           READY   STATUS      RESTARTS      AGE
mongo-test-mongodb-0           1/1     Running     0             49m
mongo-test-mongodb-1           1/1     Running     0             49m
mongo-test-mongodb-2           0/1     Running     1 (90s ago)   48m
mongo-test-mongodb-arbiter-0   1/1     Running     0  

Check that the pvc is bound:

kubectl get pvc
NAME                           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
datadir-mongo-test-mongodb-0   Bound    pvc-5bddcedc-eb0c-41ed-a230-f7c953bc537f   8Gi        RWO            local-hostpath     52m
datadir-mongo-test-mongodb-1   Bound    pvc-c187a64a-c3e6-4e4b-9669-c01e30af1dc7   8Gi        RWO            local-hostpath     51m
datadir-mongo-test-mongodb-2   Bound    pvc-b845673f-2297-40ed-b013-

Access mongo using a client pod:

kubectl run --namespace default mongo-test-mongodb-client --rm --tty -i --restart='Never' --env="MONGODB_ROOT_PASSWORD=$MONGODB_ROOT_PASSWORD" --image docker.io/bitnami/mongodb:4.4.13-debian-10-r25 --command -- bash

Run mongo cli:
 mongo admin --host "mongo-test-mongodb-0.mongo-test-mongodb-headless.default.svc.cluster.local:27017,mongo-test-mongodb-1.mongo-test-mongodb-headless.default.svc.cluster.local:27017,mongo-test-mongodb-2.mongo-test-mongodb-headless.default.svc.cluster.local:27017,mongo-test-mongodb-3.mongo-test-mongodb-headless.default.svc.cluster.local:27017" --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD

Implicit session: session { "id" : UUID("25ae50c1-932f-416d-b164-871c9144118d") }
MongoDB server version: 4.4.13
---
The server generated these startup warnings when booting: 
        2022-03-29T08:18:28.221+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem
        2022-03-29T08:18:28.460+00:00: /sys/kernel/mm/transparent_hugepage/enabled is 'always'. We suggest setting it to 'never'
        2022-03-29T08:18:28.460+00:00: /sys/kernel/mm/transparent_hugepage/defrag is 'always'. We suggest setting it to 'never'
---
---
        Enable MongoDB's free cloud-based monitoring service, which will then receive and display
        metrics about your deployment (disk utilization, CPU, operation statistics, etc).

        The monitoring data will be available on a MongoDB website with a unique URL accessible to you
        and anyone you share the URL with. MongoDB may use this information to make product
        improvements and to suggest MongoDB products and deployment options to you.

        To enable free monitoring, run the following command: db.enableFreeMonitoring()
        To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---
rs0:PRIMARY>

rs0:PRIMARY> help
  db.help()                    help on db methods
  db.mycoll.help()             help on collection methods
  sh.help()                    sharding helpers
  rs.help()                    replica set helpers
  help admin                   administrative help
  help connect                 connecting to a db help
  help keys                    key shortcuts
  help misc                    misc things to know
  help mr                      mapreduce

  show dbs                     show database names
  show collections             show collections in current database
  show users                   show users in current database
  show profile                 show most recent system.profile entries with time >= 1ms
  show logs                    show the accessible logger names
  show log [name]              prints out the last segment of log in memory, 'global' is default
  use <db_name>                set current database
  db.mycoll.find()             list objects in collection mycoll
  db.mycoll.find( { a : 1 } )  list objects in mycoll where a == 1
  it                           result of the last line evaluated; use to further iterate
  DBQuery.shellBatchSize = x   set default number of items to display on shell
  exit                         quit the mongo shell
rs0:PRIMARY> show dbs
admin   0.000GB
config  0.000GB
local   0.000GB
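A quick write and read in the shell confirms the replica set actually persists data on the openebs-backed volumes (a simple smoke test using a throwaway database name):

rs0:PRIMARY> use laf_test
rs0:PRIMARY> db.greetings.insertOne({ msg: "hello from openebs", at: new Date() })
rs0:PRIMARY> db.greetings.find()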

| Minio on openebs

Install the minio kubectl plugin (see, every project uses different tooling: helm, operator, kubectl plugin…):

wget https://github.com/minio/operator/releases/download/v4.4.4/kubectl-minio_4.4.4_linux_amd64 -O kubectl-minio
chmod +x kubectl-minio
mv kubectl-minio /usr/local/bin/

kubectl minio version

Install the minio operator:

kubectl minio init
kubectl get all --namespace minio-operator

Create the minio cluster

You can create it from the UI; it's foolproof, so I won't walk through that here:

kubectl minio proxy

Or create it using the helm chart:

# if cloning is slow, you can use a proxy: git clone https://ghproxy.com/https://github.com/minio/operator/
git clone https://github.com/minio/operator
cd operator/helm/tenant

Modify the storage class:

values.pools.servers[].storageClassName = 'local-hostpath'

Install the cluster:

helm install my-minio .

kubectl get all -n test
NAME                  READY   STATUS    RESTARTS   AGE
pod/minio1-pool-0-0   1/1     Running   0          2m23s
pod/minio1-pool-0-1   1/1     Running   0          2m23s
pod/minio1-pool-0-2   1/1     Running   0          2m23s
pod/minio1-pool-0-3   1/1     Running   0          2m23s

NAME                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/minio            ClusterIP   10.99.196.151   <none>        443/TCP    2m23s
service/minio1-console   ClusterIP   10.105.201.40   <none>        9443/TCP   2m23s
service/minio1-hl        ClusterIP   None            <none>        9000/TCP   2m23s

NAME                             READY   AGE
statefulset.apps/minio1-pool-0   4/4     2m23s
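As with mongo, you can confirm the tenant volumes were provisioned from the openebs StorageClass, and reach the console locally through a port-forward (service name and namespace taken from the output above):

kubectl get pvc -n test
kubectl -n test port-forward svc/minio1-console 9443:9443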

Common problems

DNS cannot be resolved

kubectl logs sealos-log-search-api-cb966fc87-5kmw9
2022/03/30 10:41:52 Error connecting to db: dial tcp: lookup sealos-log-hl-svc.default.svc.cluster.local on 10.96.0.10:53: no such host

There is a high probability that the host's resolv.conf is misconfigured; fix it and restart CoreDNS:

cat /etc/resolv.conf
nameserver 100.100.2.136
nameserver 100.100.2.138
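After fixing resolv.conf, restart CoreDNS so it picks up the change (assuming the standard coredns deployment in kube-system):

kubectl -n kube-system rollout restart deployment coredns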

minio log pod won't start

Warning  FailedScheduling  4m29s  default-scheduler  0/6 nodes are available: 6 pod has unbound immediate PersistentVolumeClaims.

This is because the log component's PVCs fall back to the cluster's default storageClass, and none is set, so mark a StorageClass as the default:


kubectl patch storageclass local-hostpath -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

kubectl get sc
NAME                       PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-hostpath (default)   openebs.io/local   Delete          WaitForFirstConsumer   false                  2d1h
minio-local-storage        openebs.io/local   Delete          WaitForFirstConsumer   false                  3h20m

| Summary

In fact, each component itself is already well packaged, and putting any one of them into practice is not too troublesome, but combining them is very process-oriented. The entire cloud operating system should be treated as a whole, yet it is not as out-of-the-box as Docker is on a single machine, and the technical solutions and dependencies used by each component differ, so a higher-level abstraction is needed to solve the problem.
