K8s in Action (11) | Ingress: the service of Services

Time: 2021-1-7

Preface

Ingress can be thought of as the "service of Services": it adds a layer in front of your existing Services that acts as the unified entry point for external traffic and routes requests to the right backend.

Put bluntly, it is like placing an nginx or haproxy at the front that forwards different hosts or URLs to the corresponding backend Services, which in turn forward to the Pods. Ingress simply decouples and abstracts that nginx/haproxy layer.


The significance of Ingress

Ingress addresses some shortcomings of exposing access to the outside world with a plain Service. For example, you cannot apply layer-7 URL routing rules at a unified entry point, and a single Service can only map to one backend workload.

Generally speaking, Ingress consists of two parts: the Ingress controller and the Ingress object.

The Ingress controller is the nginx/haproxy program itself, running as Pods.

The Ingress object corresponds to the nginx/haproxy configuration file.

The Ingress controller reads the rules described in the Ingress object and rewrites the nginx/haproxy configuration inside its Pods accordingly.

Deploying ingress

Prepare test resources

Deploy two Services:
accessing service 1 returns version 1;
accessing service 2 returns version 2.

Configuration for the two Deployments and Services

# cat deployment.yaml 

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-v1.0
spec:
  selector:
    matchLabels:
      app: v1.0
  replicas: 3
  template:
    metadata:
      labels:
        app: v1.0
    spec:
      containers:
      - name: hello-v1
        image: anjia0532/google-samples.hello-app:1.0
        ports:
        - containerPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-v2.0
spec:
  selector:
    matchLabels:
      app: v2.0
  replicas: 3
  template:
    metadata:
      labels:
        app: v2.0
    spec:
      containers:
      - name: hello-v2
        image: anjia0532/google-samples.hello-app:2.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: service-v1
spec:
  selector:
    app: v1.0
  ports:
  - port: 8081
    targetPort: 8080
    protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: service-v2
spec:
  selector:
    app: v2.0
  ports:
  - port: 8081
    targetPort: 8080
    protocol: TCP

The containers listen on port 8080 and the Services expose port 8081.

Start the two Services and their corresponding Pods

# kubectl apply -f deployment.yaml    
deployment.apps/hello-v1.0 created
deployment.apps/hello-v2.0 created
service/service-v1 created
service/service-v2 created

Check the startup status. Each service corresponds to 3 pods

# kubectl get pod,service -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP               NODE     NOMINATED NODE   READINESS GATES
pod/hello-v1.0-6594bd8499-lt6nn   1/1     Running   0          37s   192.10.205.234   work01   <none>           <none>
pod/hello-v1.0-6594bd8499-q58cw   1/1     Running   0          37s   192.10.137.190   work03   <none>           <none>
pod/hello-v1.0-6594bd8499-zcmf4   1/1     Running   0          37s   192.10.137.189   work03   <none>           <none>
pod/hello-v2.0-6bd99fb9cd-9wr65   1/1     Running   0          37s   192.10.75.89     work02   <none>           <none>
pod/hello-v2.0-6bd99fb9cd-pnhr8   1/1     Running   0          37s   192.10.75.91     work02   <none>           <none>
pod/hello-v2.0-6bd99fb9cd-sx949   1/1     Running   0          37s   192.10.205.236   work01   <none>           <none>

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE   SELECTOR
service/service-v1   ClusterIP   192.20.92.221   <none>        8081/TCP   37s   app=v1.0
service/service-v2   ClusterIP   192.20.255.0    <none>        8081/TCP   36s   app=v2.0

Check which backend Pods are mounted behind each Service

# kubectl get ep service-v1
NAME         ENDPOINTS                                                     AGE
service-v1   192.10.137.189:8080,192.10.137.190:8080,192.10.205.234:8080   113s
# kubectl get ep service-v2
NAME         ENDPOINTS                                                 AGE
service-v2   192.10.205.236:8080,192.10.75.89:8080,192.10.75.91:8080   113s

You can see that the two services have successfully mounted the corresponding pod.
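Before putting Ingress in front, you can optionally verify the Services from inside the cluster (an illustrative check using the ClusterIPs shown above; run it from any node that can reach the Service network):

# curl 192.20.92.221:8081
# curl 192.20.255.0:8081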

Next, deploy the front-end ingress controller.

First, label work01 and work02 so that the Ingress controller will be scheduled onto these two nodes

kubectl label nodes work01 ingress-ready=true
kubectl label nodes work02 ingress-ready=true

For the Ingress controller we use the official NGINX implementation (ingress-nginx)

wget -O ingress-controller.yaml https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/kind/deploy.yaml
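This kind-provider manifest schedules the controller only onto nodes labeled ingress-ready=true, which is why we labeled work01/work02 above. The relevant excerpt looks roughly like this (abridged; details may vary between ingress-nginx versions):

    spec:
      nodeSelector:
        ingress-ready: "true"
      tolerations:
        - key: node-role.kubernetes.io/master
          operator: Equal
          effect: NoSchedule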

Edit it so that two Ingress controller replicas are started

# vim ingress-controller.yaml

apiVersion: apps/v1
kind: Deployment
 ......
 ......
  revisionHistoryLimit: 10
  replicas: 2    # add this line

Switch to a domestic (China-accessible) mirror image

# vim ingress-controller.yaml

    spec:
      dnsPolicy: ClusterFirst
      containers:
        - name: controller
          #image: us.gcr.io/k8s-artifacts-prod/ingress-nginx/controller:v0.32.0@sha256:0e072dddd1f7f8fc8909a2ca6f65e76c5f0d2fcfb8be47935ae3457e8bbceb20
          image: registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:0.32.0
          imagePullPolicy: IfNotPresent

Deploy the Ingress controller

kubectl apply -f ingress-controller.yaml

Check that it is running

# kubectl get pod,service -n ingress-nginx -o wide 
NAME                                            READY   STATUS      RESTARTS   AGE   IP               NODE     NOMINATED NODE   READINESS GATES
pod/ingress-nginx-admission-create-ld4nt        0/1     Completed   0          15m   192.10.137.188   work03   <none>           <none>
pod/ingress-nginx-admission-patch-p5jmd         0/1     Completed   1          15m   192.10.75.85     work02   <none>           <none>
pod/ingress-nginx-controller-75f89c4965-vxt4d   1/1     Running     0          15m   192.10.205.233   work01   <none>           <none>
pod/ingress-nginx-controller-75f89c4965-zmjg2   1/1     Running     0          15m   192.10.75.87     work02   <none>           <none>

NAME                                         TYPE        CLUSTER-IP      EXTERNAL-IP                   PORT(S)                      AGE   SELECTOR
service/ingress-nginx-controller             NodePort    192.20.105.10   192.168.10.17,192.168.10.17   80:30698/TCP,443:31303/TCP   15m   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/ingress-nginx-controller-admission   ClusterIP   192.20.80.208   <none>                        443/TCP                      15m   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx

You can see that the ingress-nginx controller Pods are running on work01/work02.

Write the request-forwarding rules (the Ingress object)

# cat ingress.yaml 

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - host: test-v1.com
    http:
      paths:
      - path: /
        backend:
          serviceName: service-v1
          servicePort: 8081
  - host: test-v2.com
    http:
      paths:
      - path: /
        backend:
          serviceName: service-v2
          servicePort: 8081
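Note: this manifest uses the older networking.k8s.io/v1beta1 API, which matches the 0.32.0 controller deployed above. On Kubernetes 1.19+ the stable networking.k8s.io/v1 API is available and the backend fields change shape; a roughly equivalent sketch (assuming a controller version that supports v1) looks like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - host: test-v1.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service-v1
            port:
              number: 8081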

Apply the rules

# kubectl apply -f ingress.yaml
ingress.networking.k8s.io/nginx-ingress created
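You can also confirm that the Ingress object exists (the ADDRESS column may take a short while to fill in):

# kubectl get ingress nginx-ingress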

You can see that the nginx configuration inside the Ingress controller Pod has picked up the rules

# kubectl exec ingress-nginx-controller-75f89c4965-vxt4d -n ingress-nginx -- cat /etc/nginx/nginx.conf | grep -A 30 test-v1.com

        server {
                server_name test-v1.com ;
                
                listen 80  ;
                listen 443  ssl http2 ;
                
                set $proxy_upstream_name "-";
                
                ssl_certificate_by_lua_block {
                        certificate.call()
                }
                
                location / {
                        
                        set $namespace      "default";
                        set $ingress_name   "nginx-ingress";
                        set $service_name   "service-v1";
                        set $service_port   "8081";
                        set $location_path  "/";

Now test access from outside the cluster.

First, resolve the domain names to work01

# cat /etc/hosts
192.168.10.15 test-v1.com
192.168.10.15 test-v2.com

Access test

# curl test-v1.com
Hello, world!
Version: 1.0.0
Hostname: hello-v1.0-6594bd8499-svjnf

# curl test-v1.com
Hello, world!
Version: 1.0.0
Hostname: hello-v1.0-6594bd8499-zqjtm

# curl test-v1.com
Hello, world!
Version: 1.0.0
Hostname: hello-v1.0-6594bd8499-www76

# curl test-v2.com
Hello, world!
Version: 2.0.0
Hostname: hello-v2.0-6bd99fb9cd-h8862

# curl test-v2.com
Hello, world!
Version: 2.0.0
Hostname: hello-v2.0-6bd99fb9cd-sn84j

You can see that requests to different domain names are routed to Pods behind the correct Service.

Now point the domain names at work02 and test again

# cat /etc/hosts
192.168.10.16 test-v1.com
192.168.10.16 test-v2.com

# curl test-v1.com
Hello, world!
Version: 1.0.0
Hostname: hello-v1.0-6594bd8499-www76

# curl test-v1.com
Hello, world!
Version: 1.0.0
Hostname: hello-v1.0-6594bd8499-zqjtm

# curl test-v2.com
Hello, world!
Version: 2.0.0
Hostname: hello-v2.0-6bd99fb9cd-sn84j

# curl test-v2.com
Hello, world!
Version: 2.0.0
Hostname: hello-v2.0-6bd99fb9cd-h8862

No problem.

How to achieve high availability

Place two LVS + keepalived nodes in front of work01/work02 to provide highly available access to them.
Alternatively, run keepalived directly on work01/work02 to create a VIP, which needs no extra machines and saves cost.
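A minimal keepalived sketch for the second option (illustrative only: the interface name, VIP 192.168.10.100, and password are assumptions to adapt to your own environment):

# /etc/keepalived/keepalived.conf on work01 (MASTER); use state BACKUP and a lower priority on work02
vrrp_instance VI_1 {
    state MASTER
    interface eth0                  # the NIC carrying 192.168.10.0/24 traffic
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass ingress123
    }
    virtual_ipaddress {
        192.168.10.100              # the VIP that test-v1.com / test-v2.com should resolve to
    }
}

Point the domain names at the VIP instead of a single node, and access keeps working if either node fails.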

Concluding remarks

In this article, we deployed Ingress using a Deployment plus a NodePort Service.

The Deployment manages the Ingress controller Pods, and the NodePort Service exposes the Ingress controller to the outside.

View the Ingress controller Service

# kubectl get service -o wide -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP                   PORT(S)                      AGE   SELECTOR
ingress-nginx-controller             NodePort    192.20.105.10   192.168.10.17,192.168.10.17   80:30698/TCP,443:31303/TCP   22m   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx

You can see that port 30698 is exposed to the outside world; accessing port 30698 on any node reaches the v1/v2 Pods.
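For example, you can hit the NodePort directly and pick the backend with a Host header (an illustrative check; the node IP and port are the ones from this environment and will differ in yours):

# curl -H "Host: test-v1.com" http://192.168.10.15:30698/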

But this port is random and will change if the Service is recreated. Instead, we can access port 80 on work01/work02, where the Ingress controller Pods run.

In front of work01/work02, build an LVS + keepalived pair for highly available load balancing.

Running iptables -t nat -L -n -v on work01/work02 shows that port 80 is exposed through NAT rules, which can become a bottleneck under heavy traffic.
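For example (an illustrative check; the exact chains and counters depend on your CNI plugin):

# iptables -t nat -L -n -v | grep 'dpt:80'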

Alternatively, you can deploy the Ingress controller with a DaemonSet + hostNetwork.

That way port 80 on work01/work02 uses the host network directly, without NAT mapping, avoiding the performance concern.
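The key changes to the controller manifest would look roughly like this (a sketch only; most fields of the real ingress-nginx manifest are omitted):

apiVersion: apps/v1
kind: DaemonSet                      # was: Deployment
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  template:
    spec:
      hostNetwork: true              # bind ports 80/443 directly on the node
      dnsPolicy: ClusterFirstWithHostNet
      nodeSelector:
        ingress-ready: "true"        # still limit it to work01/work02

With hostNetwork the controller listens on the node's ports 80/443 itself, so the NodePort Service and its NAT rules are no longer on the data path.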

Contact me

WeChat official account: zuolinux_com

