After migrating from Docker to Docker Swarm and then to Kubernetes, and handling all kinds of API changes over the years, I have become comfortable finding and fixing deployment problems.
Today, I’d like to share the six troubleshooting tips that I find most useful, along with some general usage tips.
Kubectl – “Swiss Army Knife”
Kubectl is our Swiss Army knife, and knowing how to use it when something goes wrong is very important. Let’s walk through six practical scenarios and see how to apply it when there is a problem.
The situation will be: my YAML has been accepted, but my service has not started, or it has started but does not work properly.
1. kubectl get deployment/pods
The reason this command is so important is that it surfaces useful information without drowning you in output.
If you want to use deployment for your workload, you have two options:
kubectl get deploy
kubectl get deploy -n namespace
kubectl get deploy --all-namespaces   (or the short form -A)
Ideally, you want to see 1/1, 2/2, or the equivalent. This indicates that your deployment has been accepted and its pods have been attempted.
Next, you may want to run kubectl get pods to see whether the pods backing the deployment started correctly.
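As a quick sketch, checking the pods behind a deployment looks like this (the namespace and the app=nginx label below are placeholders; substitute the labels your own deployment actually sets):

```shell
# List pods in a namespace; -o wide also shows node placement and pod IPs
kubectl get pods -n namespace
kubectl get pods -n namespace -o wide

# If the deployment sets labels, filter to just its pods
# (app=nginx is a hypothetical label)
kubectl get pods -n namespace -l app=nginx
```

The STATUS and RESTARTS columns are usually the first place a crash loop or image-pull failure shows up.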
2. kubectl get events
I am surprised by how often I have to explain this tip to people who are having problems with Kubernetes. This command prints out the events in a given namespace and is ideal for finding critical problems, such as a crashing pod or a container image that cannot be pulled.
Events in Kubernetes are unsorted by default, so you will want to add the following sort flag, borrowed from the OpenFaaS documentation:
$ kubectl get events --sort-by=.metadata.creationTimestamp
A close relative of kubectl get events is kubectl describe, which works on a named object, just like get deploy/pod:
kubectl describe deploy/figlet -n openfaas
You will get very detailed information here. You can describe most things, including nodes, which will show whether a pod cannot be started due to resource constraints or other problems.
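As a sketch, describing a pod or a node looks like this (the object names below are placeholders):

```shell
# Describe a pod to see its events, restart counts, and container state
kubectl describe pod/my-pod -n my-namespace

# Describe a node to see allocatable resources, conditions, and taints
kubectl describe node my-node
```

The Events section at the bottom of the output is often the fastest route to the root cause.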
3. kubectl logs
You will use this command a lot, but many people use it the wrong way.
If you deploy, say, cert-manager in the cert-manager namespace, many people assume they must first look up the long (unique) name of the pod and pass that as a parameter. That is unnecessary.
kubectl logs deploy/cert-manager -n cert-manager
To follow (tail) the logs, add -f:
kubectl logs deploy/cert-manager -n cert-manager -f
You can combine all three of these flags.
If your deployment or pods have labels, you can use -l app=name, or whatever label selector you have set, to attach to the logs of one or more matching pods:
kubectl logs -l app=nginx
There are tools, such as stern and kail, that can help you match patterns and save some typing, but I find them distracting.
4. kubectl get -o yaml
You will need this one soon after you start using YAML generated by another project or by tools such as Helm. It is also useful in production for checking the image version, or the annotations you set somewhere.
kubectl run nginx-1 --image=nginx --port=80 --restart=Always
kubectl get deploy/nginx-1 -o yaml
Now we can see the full object. We can also add --export to strip cluster-specific fields and save the YAML locally for editing and re-applying (note that --export has since been deprecated in newer versions of kubectl).
Another option for editing the live YAML is kubectl edit. If vim confuses you and you don’t know how to use it, prefix the command with KUBE_EDITOR=nano to use that simpler editor instead.
5. kubectl scale: have you tried turning it off and on again?
Kubectl scale can be used to take a deployment and its pods down to zero replicas, effectively killing all of them. When you scale back up to 1/1, a new pod is created and your application restarts.
The syntax is simple enough, and it gives you a quick way to restart your application and test again.
kubectl scale deploy/nginx-1 --replicas=0
kubectl scale deploy/nginx-1 --replicas=1
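Newer versions of kubectl (1.15 and later) also have a dedicated command that restarts a deployment’s pods with a rolling update, so you don’t have to drop to zero replicas at all:

```shell
# Trigger a rolling restart of the deployment's pods
kubectl rollout restart deploy/nginx-1

# Watch the rollout until it completes
kubectl rollout status deploy/nginx-1
```

Unlike scaling to zero, a rolling restart keeps the old pods serving traffic until their replacements are ready.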
6. Port forwarding
This technique is a must-have. Port forwarding through kubectl lets us expose a service from a local or remote cluster on our own computer, so we can access it on a configured local port without exposing it on the Internet.
The following is an example of accessing an nginx deployment locally:
kubectl port-forward deploy/nginx-1 8080:80
Some people think this only applies to deployments or pods, which is wrong. Services are fair game, and are usually the better choice for forwarding, because they mimic the configuration of the production cluster.
If you really do want to expose a service on the Internet, you would typically use a LoadBalancer-type service, or run kubectl expose:
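For example, forwarding to a service instead of a deployment is just a change of prefix (this assumes a service named nginx-1 exists in the current namespace):

```shell
# Forward local port 8080 to port 80 of the service
kubectl port-forward svc/nginx-1 8080:80

# Bind on all interfaces if you need access from another machine
kubectl port-forward --address 0.0.0.0 svc/nginx-1 8080:80
```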
kubectl expose deployment nginx-1 --port=80 --type=LoadBalancer
I hope you find these six commands and techniques useful. Now you can test them on a real cluster.
The above are the Liangxu tutorial network’s tips for troubleshooting applications on Kubernetes.
This article was published via OpenWrite, a multi-platform blog posting service.