Git + Jenkins + Harbor + k8s (Kubernetes) for automatic deployment


System environment:

OS: CentOS 7. Git: any Git server will do.

Jenkins: LTS version, deployed on its own server, not inside the k8s cluster.

Harbor: offline installer, used to store Docker images.

The Kubernetes cluster is built with kubeadm for convenience. It has three nodes, with IPVS enabled. Server usage is as follows:

HOSTNAME       IP address    Server usage
k8s-master                   Kubernetes master
k8s-node1                    Kubernetes worker node
k8s-node2                    Kubernetes worker node
soft.test.cn                 Harbor, Jenkins


1、 Building Kubernetes

Reference for this section:

1.1 system configuration

First, add hosts entries for all 4 hosts:

 [[email protected] ~]# cat /etc/hosts


Close swap:

Temporarily (takes effect immediately, lost on reboot):

swapoff -a

Permanently (delete or comment out the swap line, then reboot):

vim /etc/fstab
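The permanent step can be scripted instead of editing by hand; a minimal sketch (GNU sed assumed) that comments out swap entries in a given fstab file, so you can try it on a copy first:

```shell
# disable_swap_in_fstab FILE
# Comment out every active fstab entry whose type is swap,
# keeping a .bak backup. Run against /etc/fstab on a real node.
disable_swap_in_fstab() {
  sed -r -i.bak 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' "$1"
}
```

On a real node you would run `swapoff -a` followed by `disable_swap_in_fstab /etc/fstab` as root.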

Close all firewalls

systemctl stop firewalld
systemctl disable firewalld

Disable SELinux:

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

Reboot for the change to take effect, or run setenforce 0 to disable it for the current session.

Pass bridged IPv4 traffic to the iptables chains and make the setting take effect:

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
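To confirm the settings actually took effect, you can read the live values back from /proc; a sketch with the directory parameterized so it can be dry-run against fake files (the default path is an assumption about a standard Linux layout):

```shell
# check_bridge_sysctls [DIR]
# Verify that both bridge-nf-call sysctls read 1. DIR defaults to the
# kernel's /proc path; pass another directory to test against fakes.
check_bridge_sysctls() {
  dir="${1:-/proc/sys/net/bridge}"
  for f in bridge-nf-call-iptables bridge-nf-call-ip6tables; do
    [ "$(cat "$dir/$f" 2>/dev/null)" = "1" ] || { echo "NOT SET: $f"; return 1; }
  done
  echo "bridge sysctls OK"
}
```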

1.2 Prerequisites for kube-proxy to enable IPVS

Since IPVS is already part of the mainline kernel, the following kernel modules just need to be loaded before enabling IPVS for kube-proxy:

ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4

Execute the following script on all Kubernetes nodes (master, node1 and node2):

cat > /etc/sysconfig/modules/ipvs.modules << EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

You can see that it has taken effect:
nf_conntrack_ipv4      15053  5 
nf_defrag_ipv4         12729  1 nf_conntrack_ipv4
ip_vs_sh               12688  0 
ip_vs_wrr              12697  0 
ip_vs_rr               12600  16 
ip_vs                 145497  22 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          139264  9 ip_vs,nf_nat,nf_nat_ipv4,nf_nat_ipv6,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4,nf_conntrack_ipv6
libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack

The /etc/sysconfig/modules/ipvs.modules file created by the script ensures that the required modules are loaded automatically after the node is restarted. Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to check whether the required kernel modules have been loaded correctly.
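That check can be scripted across nodes; a sketch that reads lsmod-style output on stdin and confirms every module kube-proxy's IPVS mode needs is present:

```shell
# required_ipvs_modules_loaded < lsmod-output
# Succeeds only if all five required modules appear in the first column.
required_ipvs_modules_loaded() {
  input=$(cat)
  for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do
    printf '%s\n' "$input" | awk '{print $1}' | grep -qx "$m" \
      || { echo "missing: $m"; return 1; }
  done
  echo "all IPVS modules loaded"
}
```

On a node: lsmod | required_ipvs_modules_loaded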

Install the ipset package on all nodes; also install ipvsadm (optional) for viewing IPVS rules:

 yum install ipset ipvsadm

1.3 Install Docker (all nodes)

Configure a domestic Docker source (Alibaba Cloud):

wget -O /etc/yum.repos.d/docker-ce.repo

Install the latest version of Docker CE:

yum -y install docker-ce
systemctl enable docker && systemctl start docker
docker --version

Configure the Kubernetes source (Alibaba Cloud):

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Manually import the GPG keys, or disable GPG checking:




rpmkeys --import
rpmkeys --import


Start installing kubeadm, kubelet and kubectl:

yum install -y kubelet kubeadm kubectl
systemctl enable kubelet
systemctl start kubelet


1.4 Start deploying Kubernetes

Initialize the master:

kubeadm init \
--apiserver-advertise-address= \
--image-repository \
--kubernetes-version v1.18.6


Focus on this part of the output:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join --token a9vg9z.dlboqvfuwwzauufq \
    --discovery-token-ca-cert-hash sha256:c2ade88a856f15de80240ff4994661a6daa668113cea0c4a4073f701f05192cb

Execute the following commands on the master node to initialize the current user's kubectl configuration:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
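Those three commands can be wrapped in a small helper; a sketch (on a real master you would also chown the file to the target user, which is skipped here since it needs root):

```shell
# init_kubectl_config ADMIN_CONF HOME_DIR
# Copy the kubeadm-generated admin.conf into HOME_DIR/.kube/config.
init_kubectl_config() {
  conf="$1"; home="$2"
  mkdir -p "$home/.kube"
  cp "$conf" "$home/.kube/config"
  chmod 600 "$home/.kube/config"   # kubeconfig holds cluster credentials
}
```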

Install the pod network plugin (flannel):

kubectl apply -f

Execute the following join command on each worker node to join it to the cluster:

kubeadm join --token a9vg9z.dlboqvfuwwzauufq --discovery-token-ca-cert-hash sha256:c2ade88a856f15de80240ff4994661a6daa668113cea0c4a4073f701f05192cb


Use kubectl get po -A -o wide to make sure all pods are running.

[[email protected] ~]# kubectl get po -A -o wide
NAMESPACE       NAME                                        READY   STATUS      RESTARTS   AGE    IP               NODE             NOMINATED NODE   READINESS GATES
kube-system     coredns-5c579bbb7b-flzkd                    1/1     Running     5          2d               
kube-system     coredns-5c579bbb7b-qz5m8                    1/1     Running     4          2d               
kube-system                         1/1     Running     5          2d              
kube-system               1/1     Running     5          2d              
kube-system      1/1     Running     5          2d              
kube-system     kube-flannel-ds-amd64-bhmps                 1/1     Running     6          2d               
kube-system     kube-flannel-ds-amd64-mbpvb                 1/1     Running     6          2d               
kube-system     kube-flannel-ds-amd64-xnw2l                 1/1     Running     6          2d              
kube-system     kube-proxy-8nkgs                            1/1     Running     6          2d               
kube-system     kube-proxy-jxtfk                            1/1     Running     4          2d              
kube-system     kube-proxy-ls7xg                            1/1     Running     4          2d               
kube-system               1/1     Running     4          2d

Enable IPVS for kube-proxy

#Modify config.conf in the kube-proxy configmap in the kube-system namespace: change mode: "" to mode: "ipvs", then save and exit
[[email protected] centos]# kubectl edit cm kube-proxy -n kube-system
configmap/kube-proxy edited
###Delete the old kube-proxy pods so they are recreated with the new configuration
[[email protected] centos]# kubectl get pod -n kube-system |grep kube-proxy |awk '{system("kubectl delete pod "$1" -n kube-system")}'
pod "kube-proxy-2m5jh" deleted
pod "kube-proxy-nfzfl" deleted
pod "kube-proxy-shxdt" deleted
#View the running status of proxy
[[email protected] centos]# kubectl get pod -n kube-system | grep kube-proxy
kube-proxy-54qnw                              1/1     Running   0          24s
kube-proxy-bzssq                              1/1     Running   0          14s
kube-proxy-cvlcm                              1/1     Running   0          37s
#Check the log. If it contains "Using ipvs Proxier", kube-proxy has IPVS enabled successfully!
[[email protected] centos]# kubectl logs kube-proxy-54qnw -n kube-system
I0518 20:24:09.319160       1 server_others.go:176] Using ipvs Proxier.
W0518 20:24:09.319751       1 proxier.go:386] IPVS scheduler not specified, use rr by default
I0518 20:24:09.320035       1 server.go:562] Version: v1.14.2
I0518 20:24:09.334372       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0518 20:24:09.334853       1 config.go:102] Starting endpoints config controller
I0518 20:24:09.334916       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I0518 20:24:09.334945       1 config.go:202] Starting service config controller
I0518 20:24:09.334976       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I0518 20:24:09.435153       1 controller_utils.go:1034] Caches are synced for service config controller
I0518 20:24:09.435271       1 controller_utils.go:1034] Caches are synced for endpoints config controller
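Grepping for that line can be wrapped up; a sketch that reads a kube-proxy log on stdin and reports which proxier was chosen:

```shell
# proxier_mode < kube-proxy-log
# Prints "ipvs" if the startup log shows the IPVS proxier, else "other".
proxier_mode() {
  if grep -q 'Using ipvs Proxier'; then
    echo ipvs
  else
    echo other
  fi
}
```

Usage: kubectl logs kube-proxy-54qnw -n kube-system | proxier_mode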

Check whether the nodes are ready

[[email protected] ~]# kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   2d    v1.18.6
k8s-node1    Ready    <none>   2d    v1.18.6
k8s-node2    Ready    <none>   2d    v1.18.6


So far, k8s has been successfully installed with kubeadm and the IPVS scheme

2、 Harbor installation

2.1 preparation

Harbor is started through docker-compose, so first we need to install docker-compose on the soft.test.cn node:

curl -L`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose


2.2 download harbor

Official address of Harbor:

Then install according to the official installation document.

I pasted my own domain name in the steps here; in practice, replace it with your own domain name.


Download offline package



Unzip the installation package

[[email protected] ~]# tar zxvf harbor-offline-installer-v1.10.4.tgz


2.3 setting up HTTPS

Generate CA certificate private key

openssl genrsa -out ca.key 4096

Generate CA certificate

openssl req -x509 -new -nodes -sha512 -days 3650 \
 -subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/" \
 -key ca.key \
 -out ca.crt

2.4 generating server certificate

A certificate usually consists of two files: a .crt certificate and a .key private key.

Generate the private key (replace yourdomain.com with your Harbor host name):

openssl genrsa -out yourdomain.com.key 4096


Generate a certificate signing request (CSR)

Adjust the values in the -subj option to reflect your organization. If you use an FQDN to connect to the Harbor host, you must specify it as the Common Name (CN) attribute, and use it in the key and CSR file names.

openssl req -sha512 -new \
    -subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=yourdomain.com" \
    -key yourdomain.com.key \
    -out yourdomain.com.csr


Generate an x509 v3 extension file

Whether you use an FQDN or an IP address to connect to the Harbor host, you must create this file so that a certificate can be generated that satisfies the Subject Alternative Name (SAN) and x509 v3 extension requirements. Replace the entries to reflect your domain.

cat > v3.ext <<-EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names

[alt_names]
DNS.1=yourdomain.com
EOF

Use this file to generate the certificate for the Harbor host

Replace yourdomain.com in the CSR and CRT file names with the Harbor host name:

openssl x509 -req -sha512 -days 3650 \
    -extfile v3.ext \
    -CA ca.crt -CAkey ca.key -CAcreateserial \
    -in yourdomain.com.csr \
    -out yourdomain.com.crt



2.5 Provide the certificates to Harbor and Docker

After generating the ca.crt, yourdomain.com.crt and yourdomain.com.key files, you must provide them to Harbor and Docker, and reconfigure Harbor to use them.

Copy the server certificate and key into the certificate folder on the Harbor host:

cp yourdomain.com.crt /data/cert/
cp yourdomain.com.key /data/cert/


Convert .crt to .cert for use by Docker

The Docker daemon interprets .crt files as CA certificates and .cert files as client certificates:

openssl x509 -inform PEM -in yourdomain.com.crt -out yourdomain.com.cert

Copy the server certificate, key and CA file into Docker's certificates folder on the Harbor host. You must create the corresponding folder first:

cp yourdomain.com.cert /etc/docker/certs.d/yourdomain.com/
cp yourdomain.com.key /etc/docker/certs.d/yourdomain.com/
cp ca.crt /etc/docker/certs.d/yourdomain.com/


Restart docker

systemctl restart docker


2.6 deploying harbor

Modify the configuration file harbor.yml.

Two areas need to be modified:

1. The host name

2. The paths of the certificate and key


Run script to enable HTTPS

[[email protected] ~]# cd harbor
[[email protected] harbor]# ./prepare 


When it finishes, execute:

[[email protected] harbor]# ./install.sh


Harbor related commands

Looking at the output below, you can see that all containers are up, and the mappings for ports 80 and 443 have been started:

[[email protected] harbor]# docker-compose ps
      Name                     Command                  State                          Ports                   
harbor-core         /harbor/harbor_core              Up (healthy)                                              
harbor-db           /            Up (healthy)   5432/tcp                                   
harbor-jobservice   /harbor/harbor_jobservice  ...   Up (healthy)                                              
harbor-log          /bin/sh -c /usr/local/bin/ ...   Up (healthy)>10514/tcp                  
harbor-portal       nginx -g daemon off;             Up (healthy)   8080/tcp                                   
nginx               nginx -g daemon off;             Up (healthy)>8080/tcp,>8443/tcp
redis               redis-server /etc/redis.conf     Up (healthy)   6379/tcp                                   
registry            /home/harbor/       Up (healthy)   5000/tcp                                   
registryctl         /home/harbor/            Up (healthy)      
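Counting the healthy containers in that listing can be scripted for a quick health check; a sketch that reads docker-compose ps output on stdin (the listing above shows 9 healthy containers):

```shell
# count_healthy < docker-compose-ps-output
# Prints how many containers report the "Up (healthy)" state.
count_healthy() {
  grep -c 'Up (healthy)'
}
```

Usage: docker-compose ps | count_healthy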


Stop Harbor

[[email protected] harbor]# docker-compose down -v
Stopping nginx             ... done
Stopping harbor-jobservice ... done
Stopping harbor-core       ... done
Stopping harbor-portal     ... done
Stopping redis             ... done
Stopping registryctl       ... done
Stopping harbor-db         ... done
Stopping registry          ... done
Stopping harbor-log        ... done
Removing nginx             ... done
Removing harbor-jobservice ... done
Removing harbor-core       ... done
Removing harbor-portal     ... done
Removing redis             ... done
Removing registryctl       ... done
Removing harbor-db         ... done
Removing registry          ... done
Removing harbor-log        ... done
Removing network harbor_harbor


Start Harbor

[[email protected] harbor]# docker-compose up -d
Creating network "harbor_harbor" with the default driver
Creating harbor-log ... done
Creating registry      ... done
Creating harbor-db     ... done
Creating redis         ... done
Creating harbor-portal ... done
Creating registryctl   ... done
Creating harbor-core   ... done
Creating nginx             ... done
Creating harbor-jobservice ... done


2.7 Login test

Log in from a browser (you need to configure hosts first). Account: admin, password: Harbor12345.

Create a Jenkins user for Jenkins to use


Create a new Jenkins project for subsequent Jenkins deployment


Add members





On each server that needs to push images, switch the Docker registry connection mode to HTTP, otherwise the default HTTPS cannot connect. Take the sonarqube image I modified as an example.

Edit /etc/docker/daemon.json and add:

{
  "insecure-registries" : [""]
}
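Creating that file can be scripted; a minimal sketch (it overwrites the target, so merge by hand if your daemon.json already has other keys; the registry name is whatever your Harbor domain is):

```shell
# write_insecure_registry FILE REGISTRY
# Write a daemon.json that whitelists REGISTRY for plain-HTTP access.
write_insecure_registry() {
  cat > "$1" <<EOF
{
  "insecure-registries": ["$2"]
}
EOF
}
```

Usage: write_insecure_registry /etc/docker/daemon.json your.harbor.domain, then restart Docker.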



Restart docker to take effect

systemctl restart docker

3、 Jenkins


Jenkins website address:

Packages are available for different systems; there is an LTS (long-term support) version and a weekly release. Download the LTS rpm:

[[email protected] ~]# wget


Install JDK

[[email protected] harbor]# yum install -y java-1.8.0-openjdk


Install Jenkins

[[email protected] harbor]# yum localinstall -y jenkins-2.235.2-1.1.noarch.rpm 


Then we enable Jenkins at boot and start it

[[email protected] harbor]# systemctl enable jenkins 
[[email protected] harbor]# systemctl start jenkins



Visit the Jenkins page at :8080 and you will see the initialization wizard.

Follow the path shown in the prompt to view the initial admin password (by default /var/lib/jenkins/secrets/initialAdminPassword).




Choose how to install plugins: the first option installs the suggested set by default, the second is manual selection. Select the default here.




After installing the plug-in, create a new user



Install the plugin CloudBees Docker Build and Publish




After the installation, we create a new item




In source code management, choose SVN or Git as needed. I chose Django test code I wrote on the Gitee website. Likewise, the default branch is master; I filled in development as needed.

Note that the Dockerfile and the code must be placed in the same place (the top level of the repository), so that the Docker Build and Publish plugin can work.





FROM python:3.6-alpine

RUN pip install django -i

COPY . /app

CMD python /app/manage.py runserver 0.0.0.0:8000






Then select Docker Build and Publish






Give /var/run/docker.sock 666 permissions so the jenkins user can use the Docker daemon:

[[email protected] harbor]# chmod 666  /var/run/docker.sock
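The Docker Build and Publish step ends up pushing a tag of the form registry/project/name:build-number; a sketch of that naming scheme (the host name below is an example, not the one from this setup):

```shell
# image_tag REGISTRY PROJECT NAME BUILD
# Compose the full image reference that gets pushed to Harbor.
image_tag() {
  printf '%s/%s/%s:%s\n' "$1" "$2" "$3" "$4"
}
```

For example, in a Jenkins build step: image_tag harbor.example.com jenkins fanli_admin "${BUILD_NUMBER}".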


Add the build again and select execute shell



The command is written this way so that kubectl is executed on the master host to deploy the pod:




ssh "cd /data/jenkins/${JOB_NAME} && sh rollout.sh ${BUILD_NUMBER}"


My script looks like this:

[[email protected] fanli_admin]# cat rollout.sh
job_number=$(date +%s)
cd ${workdir}
oldversion=$(cat ${project}_delpoyment.yaml | grep "image:" | awk -F ':' '{print $NF}')

echo "old version is: "${oldversion}
echo "new version is: "${newversion}

## replace the image version
sed -i.bak${job_number} 's/'"${oldversion}"'/'"${newversion}"'/g' ${project}_delpoyment.yaml

## apply the new version
kubectl apply -f ${project}_delpoyment.yaml --record=true
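The sed-based version swap in the script above can be isolated into a function and tried on a copy of the deployment file first; a sketch (here the new version is passed explicitly; in the real script it presumably arrives via the Jenkins build arguments):

```shell
# bump_image_version FILE NEWVERSION
# Replace the current image tag in a deployment yaml with NEWVERSION,
# keeping a .bak backup, and report both versions.
bump_image_version() {
  file="$1"; newversion="$2"
  oldversion=$(grep 'image:' "$file" | awk -F ':' '{print $NF}')
  sed -i.bak "s/${oldversion}/${newversion}/g" "$file"
  echo "old=${oldversion} new=${newversion}"
}
```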


Of course, passwordless SSH from this host's root user to the master's root user has also been configured:

[[email protected] harbor]# ssh-keygen      # generate the key pair
[[email protected] harbor]# ssh-copy-id    # copy the public key to the master



Click save


Change Jenkins to run as the root user, then restart Jenkins

[[email protected] harbor]# vim /etc/sysconfig/jenkins

JENKINS_USER="jenkins"

Amend to read:

JENKINS_USER="root"

Restart Jenkins

[[email protected] harbor]# systemctl restart jenkins


4、 K8s configuration and deployment of the Jenkins project

Configure k8s to log in to Harbor

Create Secrets

kubectl create secret docker-registry harbor-login-registry --docker-username=jenkins --docker-password=123456 
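For reference, what this secret stores is a .dockerconfigjson whose auths entry carries base64(user:password); a sketch of just that encoding step:

```shell
# registry_auth USER PASSWORD
# Print the base64 auth string kubectl embeds in a docker-registry secret.
registry_auth() {
  printf '%s:%s' "$1" "$2" | base64
}
```

Usage: registry_auth jenkins 123456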



Deploying ingress host

kubectl apply -f


After a while you’ll see that the installation is complete:

[[email protected] fanli_admin]# kubectl get po,svc -n ingress-nginx
NAME                                            READY   STATUS      RESTARTS   AGE
pod/ingress-nginx-admission-create-c9qfd        0/1     Completed   0          4d2h
pod/ingress-nginx-admission-patch-6bdn4         0/1     Completed   0          4d2h
pod/ingress-nginx-controller-8555c97f66-d7tlj   1/1     Running     2          4d2h

NAME                                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/ingress-nginx-controller             NodePort           80:30657/TCP,443:30327/TCP   4d2h
service/ingress-nginx-controller-admission   ClusterIP           443/TCP                      4d2h


Preparation before deployment

Put fanli_admin_svc.yaml, fanli_admin_ingress.yaml and fanli_admin_delpoyment.yaml in the same directory as rollout.sh

[[email protected] fanli_admin]# cat fanli_admin_delpoyment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fanliadmin
  namespace: default
spec:
  minReadySeconds: 5
  # indicate which strategy we want for rolling update
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  replicas: 2
  selector:
    matchLabels:
      app: fanli_admin
  template:
    metadata:
      labels:
        app: fanli_admin
    spec:
      imagePullSecrets:
      - name: harbor-login-registry
      terminationGracePeriodSeconds: 10
      containers:
      - name: fanliadmin
        image:   # e.g. <harbor-hostname>/jenkins/fanli_admin:<version>
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8000   # external access port
          name: web
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /members
            port: 8000
          initialDelaySeconds: 60   # after the container starts, wait 60 seconds before probing
          timeoutSeconds: 5
          failureThreshold: 12   # after a successful start, kubernetes retries a failed check failureThreshold times before giving up; giving up the liveness check restarts the pod, giving up the readiness check marks the pod not ready. Default is 3, minimum is 1
        readinessProbe:
          httpGet:
            path: /members
            port: 8000
          initialDelaySeconds: 60
          timeoutSeconds: 5
          failureThreshold: 12



[[email protected] fanli_admin]# cat fanli_admin_svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: fanliadmin
  namespace: default
  labels:
    app: fanli_admin
spec:
  selector:
    app: fanli_admin
  type: ClusterIP
  ports:
  - name: web
    port: 8000
    targetPort: 8000




[[email protected] fanli_admin]# cat fanli_admin_ingress.yaml 
apiVersion: extensions/v1beta1 
kind: Ingress 
metadata:
  name: fanliadmin
spec:
  rules:
    - host:
      http:
        paths:
          - path: /  
            backend:
              serviceName: fanliadmin
              servicePort: 8000





On the Jenkins home page, click the project, then Build Now




Click build and it will be deployed automatically.

Click the console output to view the detailed deployment process.

When you see SUCCESS in the console output, the deployment has succeeded.


Now let’s look at k8s

[[email protected] fanli_admin]# kubectl get pod 
NAME                          READY   STATUS    RESTARTS   AGE
fanliadmin-5575cc56ff-pdk4r   1/1     Running   0          3m3s
fanliadmin-5575cc56ff-sz2j5   1/1     Running   0          3m4s


[[email protected] fanli_admin]# kubectl rollout history deployment/fanliadmin
REVISION  CHANGE-CAUSE
3         kubectl apply --filename=fanli_admin_delpoyment.yaml --record=true
4         kubectl apply --filename=fanli_admin_delpoyment.yaml --record=true
5         kubectl apply --filename=fanli_admin_delpoyment.yaml --record=true


The new version has been deployed successfully

5、 Visit

Access through ingress

[[email protected] fanli_admin]# kubectl get pod,svc -n ingress-nginx
NAME                                            READY   STATUS      RESTARTS   AGE
pod/ingress-nginx-admission-create-c9qfd        0/1     Completed   0          4d2h
pod/ingress-nginx-admission-patch-6bdn4         0/1     Completed   0          4d2h
pod/ingress-nginx-controller-8555c97f66-d7tlj   1/1     Running     2          4d2h

NAME                                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/ingress-nginx-controller             NodePort           80:30657/TCP,443:30327/TCP   4d2h
service/ingress-nginx-controller-admission   ClusterIP           443/TCP                      4d2h


From the output we can see that to reach the ingress's port 80 from outside, we access NodePort 30657; for port 443, we access NodePort 30327.
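Pulling a NodePort out of the PORT(S) column can be scripted; a sketch that reads a value like 80:30657/TCP,443:30327/TCP on stdin:

```shell
# nodeport_for PORT < ports-column
# Print the NodePort that maps to the given service port.
nodeport_for() {
  tr ',' '\n' | awk -F'[:/]' -v p="$1" '$1 == p { print $2 }'
}
```

For example: kubectl get svc ingress-nginx-controller -n ingress-nginx -o jsonpath='{.spec.ports}' piped through the function, or just paste the PORT(S) value.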

At this point we can access the domain name bound in our fanli_admin_ingress.yaml.

Bind the domain name in the hosts file on your Windows machine

Then visit :30657/members


It’s been successful
