Git + Jenkins + Harbor + k8s (Kubernetes) for automatic deployment

Time:2021-5-1

System environment:

centos 7 

Git: gitee.com (of course, any Git server will do)

Jenkins: LTS version, deployed directly on the server, not inside the k8s cluster

Harbor: offline installer, used to store Docker images

For convenience, the Kubernetes cluster is built with kubeadm. It has three nodes and IPVS is enabled. The servers are used as follows:

 
HOSTNAME          IP address        Server usage
master.test.cn    192.168.184.31    k8s-master
node1.test.cn     192.168.184.32    k8s-node1
node2.test.cn     192.168.184.33    k8s-node2
soft.test.cn      192.168.184.34    harbor, jenkins

 

1、 Building the Kubernetes cluster

Reference for this section: https://www.cnblogs.com/lovesKey/p/10888006.html

1.1 system configuration

First, add the hosts entries on all four hosts

 [[email protected] ~]# cat /etc/hosts
 192.168.184.31    master.test.cn
 192.168.184.32    node1.test.cn fanli.test.cn
 192.168.184.33    node2.test.cn
 192.168.184.34    soft.test.cn jenkins.test.cn harbor.test.cn

 

Turn off swap:
Temporarily:

swapoff -a

Permanently (delete or comment out the swap line, then reboot):

vim /etc/fstab
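If you prefer to do this non-interactively, a one-liner like the following should work (a sketch; it assumes the swap entry is the only fstab line containing " swap "):

sed -i '/ swap / s/^/#/' /etc/fstab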

Stop and disable the firewall on all nodes

systemctl stop firewalld
systemctl disable firewalld

Disable SELinux:

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

Reboot for the change to take effect (or run setenforce 0 to disable it immediately)

Pass bridged IPv4 traffic to the iptables chains, then apply the setting:

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

1.2 prerequisites for enabling IPVS mode in kube-proxy

Since IPVS is already part of the mainline kernel, only the following kernel modules need to be loaded before enabling IPVS mode for kube-proxy:

ip_vs

ip_vs_rr

ip_vs_wrr

ip_vs_sh

nf_conntrack_ipv4

Execute the following script on all Kubernetes nodes (master, node1 and node2):

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

You can see that the modules have taken effect:
nf_conntrack_ipv4      15053  5 
nf_defrag_ipv4         12729  1 nf_conntrack_ipv4
ip_vs_sh               12688  0 
ip_vs_wrr              12697  0 
ip_vs_rr               12600  16 
ip_vs                 145497  22 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          139264  9 ip_vs,nf_nat,nf_nat_ipv4,nf_nat_ipv6,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4,nf_conntrack_ipv6
libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack

The /etc/sysconfig/modules/ipvs.modules file created by the script ensures that the required modules are loaded automatically after the node restarts. Run lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to check whether the required kernel modules have been loaded correctly.

Install the ipset package on all nodes, and optionally install ipvsadm for viewing IPVS rules

 yum install ipset ipvsadm
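Once kube-proxy has been switched to IPVS mode (done later in this article), ipvsadm can be used to inspect the virtual servers it creates, for example:

ipvsadm -Ln      # list IPVS virtual servers and their real servers (numeric output)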

1.3 install docker (all nodes)

Configure the Docker yum repo (Aliyun mirror)

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

Install the latest version of Docker CE

yum -y install docker-ce
systemctl enable docker && systemctl start docker
docker --version

Configure the Kubernetes yum repo (Aliyun mirror)

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

If GPG verification fails, either import the keys manually (commands below) or disable the check by setting gpgcheck=0 in the repo file.

 

 

rpmkeys --import https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
rpmkeys --import https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

 

Now install kubeadm, kubelet and kubectl:

yum install -y kubelet kubeadm kubectl
systemctl enable kubelet
systemctl start kubelet
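Note that this pulls the latest packages, while the kubeadm init below uses v1.18.6; to keep the versions consistent you could pin them instead (a sketch; the exact release suffix depends on the repo):

yum install -y kubelet-1.18.6 kubeadm-1.18.6 kubectl-1.18.6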

 

1.4 start deploying kubernetes

Initialize master

kubeadm init \
--apiserver-advertise-address=192.168.184.31 \
--image-repository lank8s.cn \
--kubernetes-version v1.18.6 \
--pod-network-cidr=10.244.0.0/16

 

Pay attention to this part of the output:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.184.31:6443 --token a9vg9z.dlboqvfuwwzauufq \
    --discovery-token-ca-cert-hash sha256:c2ade88a856f15de80240ff4994661a6daa668113cea0c4a4073f701f05192cb

Execute the following on the master node to initialize the kubectl configuration for the current user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Install the pod network add-on (flannel)

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Execute the join command on each worker node to join the cluster

kubeadm join 192.168.184.31:6443 --token a9vg9z.dlboqvfuwwzauufq --discovery-token-ca-cert-hash sha256:c2ade88a856f15de80240ff4994661a6daa668113cea0c4a4073f701f05192cb
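If the token has expired by the time a node joins (tokens are valid for 24 hours by default), a fresh join command can be printed on the master, for example:

kubeadm token create --print-join-command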

 

Use kubectl get po -A -o wide to make sure all pods are Running.

[[email protected] ~]# kubectl get po -A -o wide
NAMESPACE       NAME                                        READY   STATUS      RESTARTS   AGE    IP               NODE             NOMINATED NODE   READINESS GATES
kube-system     coredns-5c579bbb7b-flzkd                    1/1     Running     5          2d     10.244.1.14      node1.test.cn               
kube-system     coredns-5c579bbb7b-qz5m8                    1/1     Running     4          2d     10.244.2.9       node2.test.cn               
kube-system     etcd-master.test.cn                         1/1     Running     5          2d     192.168.184.31   master.test.cn              
kube-system     kube-apiserver-master.test.cn               1/1     Running     5          2d     192.168.184.31   master.test.cn              
kube-system     kube-controller-manager-master.test.cn      1/1     Running     5          2d     192.168.184.31   master.test.cn              
kube-system     kube-flannel-ds-amd64-bhmps                 1/1     Running     6          2d     192.168.184.33   node2.test.cn               
kube-system     kube-flannel-ds-amd64-mbpvb                 1/1     Running     6          2d     192.168.184.32   node1.test.cn               
kube-system     kube-flannel-ds-amd64-xnw2l                 1/1     Running     6          2d     192.168.184.31   master.test.cn              
kube-system     kube-proxy-8nkgs                            1/1     Running     6          2d     192.168.184.32   node1.test.cn               
kube-system     kube-proxy-jxtfk                            1/1     Running     4          2d     192.168.184.31   master.test.cn              
kube-system     kube-proxy-ls7xg                            1/1     Running     4          2d     192.168.184.33   node2.test.cn               
kube-system     kube-scheduler-master.test.cn               1/1     Running     4          2d     192.168.184.31   master.test.cn

Enable IPVS mode for kube-proxy

#Modify config.conf in the kube-proxy configmap in the kube-system namespace: change mode: "" to mode: "ipvs", then save and exit
[[email protected] centos]# kubectl edit cm kube-proxy -n kube-system
configmap/kube-proxy edited
#Delete the existing kube-proxy pods so they are recreated with the new mode
[[email protected] centos]# kubectl get pod -n kube-system |grep kube-proxy |awk '{system("kubectl delete pod "$1" -n kube-system")}'
pod "kube-proxy-2m5jh" deleted
pod "kube-proxy-nfzfl" deleted
pod "kube-proxy-shxdt" deleted
#View the running status of proxy
[[email protected] centos]# kubectl get pod -n kube-system | grep kube-proxy
kube-proxy-54qnw                              1/1     Running   0          24s
kube-proxy-bzssq                              1/1     Running   0          14s
kube-proxy-cvlcm                              1/1     Running   0          37s
#Check the log. If it contains "Using ipvs Proxier.", kube-proxy is running in IPVS mode successfully!
[[email protected] centos]# kubectl logs kube-proxy-54qnw -n kube-system
I0518 20:24:09.319160       1 server_others.go:176] Using ipvs Proxier.
W0518 20:24:09.319751       1 proxier.go:386] IPVS scheduler not specified, use rr by default
I0518 20:24:09.320035       1 server.go:562] Version: v1.14.2
I0518 20:24:09.334372       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0518 20:24:09.334853       1 config.go:102] Starting endpoints config controller
I0518 20:24:09.334916       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I0518 20:24:09.334945       1 config.go:202] Starting service config controller
I0518 20:24:09.334976       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I0518 20:24:09.435153       1 controller_utils.go:1034] Caches are synced for service config controller
I0518 20:24:09.435271       1 controller_utils.go:1034] Caches are synced for endpoints config controller

Check whether the nodes are Ready

[[email protected] ~]# kubectl get node
NAME             STATUS   ROLES    AGE   VERSION
master.test.cn   Ready    master   2d    v1.18.6
node1.test.cn    Ready    <none>   2d    v1.18.6
node2.test.cn    Ready    <none>   2d    v1.18.6

 

At this point, k8s has been installed successfully with kubeadm, using the IPVS proxy mode

2、 Harbor installation

2.1 preparation

Harbor is started via docker-compose, so we first need to install docker-compose on the soft.test.cn node

curl -L https://github.com/docker/compose/releases/download/1.26.1/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose 
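The downloaded binary also needs to be made executable; a quick sanity check afterwards (a minimal sketch):

chmod +x /usr/local/bin/docker-compose
docker-compose --version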

 

2.2 download harbor

Official address of harbor: https://github.com/goharbor/harbor/releases

Then install according to the official installation document: https://github.com/goharbor/harbor/blob/master/docs/install-config/_index.md

In the steps below I use my own domain name; when following along, replace harbor.test.cn with your own domain.

 

Download offline package

wget https://github.com/goharbor/harbor/releases/download/v1.10.4/harbor-offline-installer-v1.10.4.tgz

 

Unzip the installation package

[[email protected] ~]# tar zxvf harbor-offline-installer-v1.10.4.tgz

 

2.3 setting up HTTPS

Generate CA certificate private key

openssl genrsa -out ca.key 4096

Generate CA certificate

openssl req -x509 -new -nodes -sha512 -days 3650 \
 -subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=harbor.test.cn" \
 -key ca.key \
 -out ca.crt

2.4 generating server certificate

A certificate usually consists of a .crt file and a .key file, for example yourdomain.com.crt and yourdomain.com.key.

Generate private key

openssl genrsa -out harbor.test.cn.key 4096

 

Generate certificate signing request (CSR)

Adjust the values in the options to reflect your organization. If you use an FQDN to connect to the Harbor host, you must specify it as the common name (CN) attribute of -subj and use it in the key and CSR file names.

openssl req -sha512 -new \
    -subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=harbor.test.cn" \
    -key harbor.test.cn.key \
    -out harbor.test.cn.csr

 

Generate x509 V3 extension file

Whether you use an FQDN or an IP address to connect to the Harbor host, you must create this file so that a certificate satisfying the Subject Alternative Name (SAN) and x509 v3 extension requirements can be generated for the Harbor host. Replace the entries to reflect your domain.

cat > v3.ext <<-EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names

[alt_names]
DNS.1=harbor.test.cn
EOF

Use this file to generate the certificate for the Harbor host

Replace yourdomain.com in the CSR and CRT file names with the Harbor host name:

openssl x509 -req -sha512 -days 3650 \
    -extfile v3.ext \
    -CA ca.crt -CAkey ca.key -CAcreateserial \
    -in harbor.test.cn.csr \
    -out harbor.test.cn.crt

 

 

2.5 Provide certificates to Harbor and Docker

After generating the ca.crt, harbor.test.cn.crt and harbor.test.cn.key files, you must provide them to Harbor and Docker, and reconfigure Harbor to use them.

Copy the server certificate and key to the certificate folder (/data/cert) on the Harbor host

cp harbor.test.cn.crt /data/cert/
cp harbor.test.cn.key /data/cert/

 

Convert harbor.test.cn.crt to harbor.test.cn.cert for use by Docker

The Docker daemon interprets a .crt file as a CA certificate and a .cert file as a client certificate

openssl x509 -inform PEM -in harbor.test.cn.crt -out harbor.test.cn.cert

Copy the server certificate, key and CA file to the Docker certificate folder on the Harbor host. You must create the corresponding folder first.
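For example, assuming the folder does not already exist:

mkdir -p /etc/docker/certs.d/harbor.test.cn/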

cp harbor.test.cn.cert /etc/docker/certs.d/harbor.test.cn/
cp harbor.test.cn.key /etc/docker/certs.d/harbor.test.cn/
cp ca.crt /etc/docker/certs.d/harbor.test.cn/

 

Restart docker

systemctl restart docker

 

2.6 deploying harbor

Modify the configuration file harbor.yml

Two things need to be modified (a sketch of the relevant lines follows the list):

1. Modify the host name

2. Modify the path of certificate and key
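A minimal sketch of the relevant harbor.yml fields, assuming the certificate and key generated above were placed under /data/cert (all other fields keep their defaults):

hostname: harbor.test.cn

https:
  port: 443
  certificate: /data/cert/harbor.test.cn.crt
  private_key: /data/cert/harbor.test.cn.key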

 

Run the prepare script to enable HTTPS

[[email protected] ~]# cd harbor
[[email protected] harbor]# ./prepare 

 

When that finishes, execute install.sh

[[email protected] harbor]# ./install.sh 

 

Harbor-related commands

Looking at the output, you can see that all containers are up and that the port 80 and 443 mappings are active

[[email protected] harbor]# docker-compose ps
      Name                     Command                  State                          Ports                   
---------------------------------------------------------------------------------------------------------------
harbor-core         /harbor/harbor_core              Up (healthy)                                              
harbor-db           /docker-entrypoint.sh            Up (healthy)   5432/tcp                                   
harbor-jobservice   /harbor/harbor_jobservice  ...   Up (healthy)                                              
harbor-log          /bin/sh -c /usr/local/bin/ ...   Up (healthy)   127.0.0.1:1514->10514/tcp                  
harbor-portal       nginx -g daemon off;             Up (healthy)   8080/tcp                                   
nginx               nginx -g daemon off;             Up (healthy)   0.0.0.0:80->8080/tcp, 0.0.0.0:443->8443/tcp
redis               redis-server /etc/redis.conf     Up (healthy)   6379/tcp                                   
registry            /home/harbor/entrypoint.sh       Up (healthy)   5000/tcp                                   
registryctl         /home/harbor/start.sh            Up (healthy)      

 

Stop Harbor

[[email protected] harbor]# docker-compose down -v
Stopping nginx             ... done
Stopping harbor-jobservice ... done
Stopping harbor-core       ... done
Stopping harbor-portal     ... done
Stopping redis             ... done
Stopping registryctl       ... done
Stopping harbor-db         ... done
Stopping registry          ... done
Stopping harbor-log        ... done
Removing nginx             ... done
Removing harbor-jobservice ... done
Removing harbor-core       ... done
Removing harbor-portal     ... done
Removing redis             ... done
Removing registryctl       ... done
Removing harbor-db         ... done
Removing registry          ... done
Removing harbor-log        ... done
Removing network harbor_harbor

 

Start Harbor

[[email protected] harbor]# docker-compose up -d
Creating network "harbor_harbor" with the default driver
Creating harbor-log ... done
Creating registry      ... done
Creating harbor-db     ... done
Creating redis         ... done
Creating harbor-portal ... done
Creating registryctl   ... done
Creating harbor-core   ... done
Creating nginx             ... done
Creating harbor-jobservice ... done

 

2.7 Login test

Log in from a browser at https://harbor.test.cn (you need a hosts entry on your workstation). The account is admin and the default password is Harbor12345.

Create a Jenkins user for Jenkins to use

 

Create a new Jenkins project for subsequent Jenkins deployment

 

Add members

 

 

 

 

On every server that needs to push images, configure the Docker registry connection to allow insecure (HTTP/untrusted) access, otherwise the default HTTPS connection will fail. Here I use pushing a sonarqube image to harbor.test.cn as the example.

Edit /etc/docker/daemon.json and add:

{
  "insecure-registries" : ["harbor.test.cn"]
}

 

Restart Docker for the change to take effect

systemctl restart docker
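You can verify the connection from that server by logging in to the registry, for example with the jenkins user created in Harbor earlier:

docker login harbor.test.cn -u jenkins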

3、 Jenkins

install

Jenkins website address: https://www.jenkins.io/zh/

It can be installed on various systems; there are two release lines, the LTS (long-term support) version and the weekly release. Here we use the LTS rpm.

[[email protected] ~]# wget https://mirrors.tuna.tsinghua.edu.cn/jenkins/redhat-stable/jenkins-2.235.2-1.1.noarch.rpm

 

Install JDK

[[email protected] harbor]# yum install -y java-1.8.0-openjdk

 

Install Jenkins

[[email protected] harbor]# yum localinstall -y jenkins-2.235.2-1.1.noarch.rpm 

 

Then enable Jenkins at boot and start it

[[email protected] harbor]# systemctl enable jenkins 
[[email protected] harbor]# systemctl start jenkins

 

 

Visit the Jenkins page at http://192.168.184.34:8080/ and the initialization wizard appears

Follow the path prompted to view the password

 

 

 

Choose how to install plugins: the first option installs the suggested plugins, the second lets you select them manually. Here, select the default.

 

 

 

After installing the plug-in, create a new user

 

 

Install the CloudBees Docker Build and Publish plugin

 

 

 

After the installation, we create a new item

 

 

 

In Source Code Management, choose SVN or Git as needed; I chose Git and pointed it at the Django test project I keep on gitee. The branch defaults to master; fill in your own branch as needed (I use development).

Note that the Dockerfile must live alongside the code at the top level of the repository so that the Docker Build and Publish plugin can find it

 

 

 

Dockerfile

FROM python:3.6-alpine

ENV PYTHONUNBUFFERED 1

WORKDIR /app

RUN pip install django -i https://pypi.douban.com/simple

COPY . /app

CMD python /app/manage.py runserver 0.0.0.0:8000
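Before wiring the repository into Jenkins, the Dockerfile can be sanity-checked locally on any Docker host (this assumes manage.py sits at the repository root; the image name django-test is just a throwaway example):

docker build -t django-test .
docker run --rm -p 8000:8000 django-test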

 

 

 

 

 

Then add a Docker Build and Publish build step
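Roughly, the Docker Build and Publish step in this lab would be configured along these lines (the repository name and tag are assumptions inferred from the deployment yaml and rollout.sh shown later, and the field labels may differ slightly between plugin versions):

Repository Name:       jenkins/django
Tag:                   ${BUILD_NUMBER}
Docker registry URL:   https://harbor.test.cn
Registry credentials:  the jenkins user created in Harbor earlier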

 

 

 

 

 

Give /var/run/docker.sock 666 permissions on soft.test.cn so the Jenkins process can talk to the Docker daemon

[[email protected] harbor]# chmod 666  /var/run/docker.sock

 

Add another build step and select Execute shell

 

 

The command below SSHes to the master host so that kubectl runs there and rolls out the pods

 

 

 

ssh 192.168.184.31 "cd /data/jenkins/${JOB_NAME} && sh rollout.sh ${BUILD_NUMBER}"

 

My script is like this:

[[email protected] fanli_admin]# cat rollout.sh 
#!/bin/bash
workdir="/data/jenkins/fanli_admin"
project="fanli_admin"
job_number=$(date +%s)
cd ${workdir}
oldversion=$(cat ${project}_delpoyment.yaml | grep "image:" | awk -F ':' '{print $NF}')
newversion=$1

echo "old version is: "${oldversion}
echo "new version is: "${newversion}

## replace the image version in the deployment yaml
sed -i.bak${job_number} 's/'"${oldversion}"'/'"${newversion}"'/g' ${project}_delpoyment.yaml

## apply the new version
kubectl apply -f ${project}_delpoyment.yaml --record=true
[[email protected] fanli_admin]#

 

Of course, the root user on soft.test.cn must be configured for passwordless (key-based) SSH to the root user on master.test.cn:

[[email protected] harbor]# ssh-keygen                   # generate the key pair
[[email protected] harbor]# ssh-copy-id 192.168.184.31   # copy the public key to the master

 

 

Click save

 

Change the Jenkins run user to root and restart Jenkins

[[email protected] harbor]# vim /etc/sysconfig/jenkins

Change
JENKINS_USER="jenkins"
to
JENKINS_USER="root"

 

Restart Jenkins

[[email protected] harbor]# systemctl restart jenkins

 

4、 K8s configuration and deployment of Jenkins project

Configure k8s to log in to Gabor

Create Secrets

kubectl create secret docker-registry harbor-login-registry --docker-server=harbor.test.cn --docker-username=jenkins --docker-password=123456 

 

 

Deploy ingress-nginx

On the master.test.cn host:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.34.1/deploy/static/provider/baremetal/deploy.yaml

 

After a while you can see that the installation is complete

[[email protected] fanli_admin]# kubectl get po,svc -n ingress-nginx
NAME                                            READY   STATUS      RESTARTS   AGE
pod/ingress-nginx-admission-create-c9qfd        0/1     Completed   0          4d2h
pod/ingress-nginx-admission-patch-6bdn4         0/1     Completed   0          4d2h
pod/ingress-nginx-controller-8555c97f66-d7tlj   1/1     Running     2          4d2h

NAME                                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/ingress-nginx-controller             NodePort    10.110.52.209   <none>        80:30657/TCP,443:30327/TCP   4d2h
service/ingress-nginx-controller-admission   ClusterIP   10.102.19.224   <none>        443/TCP                      4d2h

 

Preparation before deployment

Put fanli_admin_svc.yaml and fanli_admin_ingress.yaml in the same directory as rollout.sh


[[email protected] fanli_admin]# cat fanli_admin_delpoyment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fanliadmin
  namespace: default
spec:
  minReadySeconds: 5
  strategy:
  # indicate which strategy we want for rolling update
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  replicas: 2
  selector:
    matchLabels:
      app: fanli_admin
  template:
    metadata:
      labels:
        app: fanli_admin
    spec:
      imagePullSecrets:
      - name: harbor-login-registry
      terminationGracePeriodSeconds: 10
      containers:
      - name: fanliadmin
        image: harbor.test.cn/jenkins/django:26
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8000   # port exposed for external access
          name: web
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /members
            port: 8000
          initialDelaySeconds: 60   # wait 60 seconds after the container starts before the probe check
          timeoutSeconds: 5
          failureThreshold: 12   # number of consecutive failures before giving up; for the liveness probe this restarts the pod, for the readiness probe the pod is marked not ready (default 3, minimum 1)
        readinessProbe:
          httpGet:
            path: /members
            port: 8000
          initialDelaySeconds: 60
          timeoutSeconds: 5
          failureThreshold: 12

fanli_admin_delpoyment.yaml


[[email protected] fanli_admin]# cat fanli_admin_svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: fanliadmin
  namespace: default
  labels:
    app: fanli_admin
spec:
  selector:
    app: fanli_admin
  type: ClusterIP
  ports:
  - name: web
    port: 8000
    targetPort: 8000

fanli_admin_svc.yaml

 


[[email protected] fanli_admin]# cat fanli_admin_ingress.yaml 
apiVersion: extensions/v1beta1 
kind: Ingress 
metadata:
  name: fanliadmin
spec:
  rules:
    - host: fanli.test.cn
      http:
        paths: 
          - path: /  
            backend:
              serviceName: fanliadmin
              servicePort: 8000

fanli_admin_ingress.yaml

 

 

Deploy

On the Jenkins home page, click the project, then Build Now

 

 

 

Click finish and it will be deployed automatically

Click the console output to view the detailed deployment process

 

 

If the console output ends with SUCCESS, the deployment succeeded

 

Now let’s look at k8s

[[email protected] fanli_admin]# kubectl get pod 
NAME                          READY   STATUS    RESTARTS   AGE
fanliadmin-5575cc56ff-pdk4r   1/1     Running   0          3m3s
fanliadmin-5575cc56ff-sz2j5   1/1     Running   0          3m4s

 

[[email protected] fanli_admin]# kubectl rollout history deployment/fanliadmin
deployment.apps/fanliadmin 
REVISION  CHANGE-CAUSE
2         
3         kubectl apply --filename=fanli_admin_delpoyment.yaml --record=true
4         kubectl apply --filename=fanli_admin_delpoyment.yaml --record=true
5         kubectl apply --filename=fanli_admin_delpoyment.yaml --record=true

 

The new version has been deployed successfully

5、 Visit

Access through ingress

[[email protected] fanli_admin]# kubectl get pod,svc -n ingress-nginx
NAME                                            READY   STATUS      RESTARTS   AGE
pod/ingress-nginx-admission-create-c9qfd        0/1     Completed   0          4d2h
pod/ingress-nginx-admission-patch-6bdn4         0/1     Completed   0          4d2h
pod/ingress-nginx-controller-8555c97f66-d7tlj   1/1     Running     2          4d2h

NAME                                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/ingress-nginx-controller             NodePort    10.110.52.209   <none>        80:30657/TCP,443:30327/TCP   4d2h
service/ingress-nginx-controller-admission   ClusterIP   10.102.19.224   <none>        443/TCP                      4d2h

 

We can see that external access to port 80 goes through NodePort 30657 of the ingress controller, and port 443 through NodePort 30327

The domain name bound in our fanli_admin_ingress.yaml is fanli.test.cn

Bind the hosts entry for fanli.test.cn on your Windows machine.

Then visit http://fanli.test.cn:30657/members
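If you do not want to edit the hosts file, you can also test from any machine with curl by supplying the Host header, for example against the master's IP and the NodePort shown above:

curl -H "Host: fanli.test.cn" http://192.168.184.31:30657/members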

 

The access succeeds
