Ah. The easiest thing to do would be to avoid deleting the service before every deployment. In my experience, services tend to be very long-lived; they provide a nice, fixed way to refer to things without having to worry about dynamic values for ports, IPs, DNS, etc.
In the Kibana Service spec, remove the nodePort entry from the port configuration so that Kubernetes can assign one automatically; that's one less thing to think about. Don't set values for loadBalancerIP or externalIPs. The same rules apply to the other services.
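As a minimal sketch, a Kibana Service spec along those lines might look like this (the name, namespace, selector label, and service type are assumptions; 5601 is Kibana's standard port):

```
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: logging
spec:
  type: LoadBalancer
  selector:
    app: kibana
  ports:
    - port: 5601
      targetPort: 5601
      # no nodePort, loadBalancerIP, or externalIPs:
      # let Kubernetes assign these itself
```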
For the ELK stack config files (I don't recall off the top of my head what they look like), refer to other components by their service names: there's no need for hardcoded IPs or anything. (No idea if you were doing this, but just in case.)
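For example, a Logstash output section can point at Elasticsearch by service name rather than IP (a sketch, assuming the Elasticsearch Service is named elasticsearch and both run in the same namespace):

```
output {
  elasticsearch {
    # "elasticsearch" resolves via cluster DNS to the Service,
    # so the backing pods can come and go freely
    hosts => ["http://elasticsearch:9200"]
  }
}
```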
Allow the services to be created; get the load balancer's external IP and plug it into your DNS config.
You can continue using namespaces if that's how you prefer to do things, but don't delete the whole namespace to clear out the Deployments for ELK components.
Split your ELK stack spec into separate files for Deployments and Services (technically, I'm not sure this is required; you might be able to get away with a single file), so that you can use:
kubectl delete -f logging-deployments.yaml
kubectl apply -f logging-deployments.yaml
or a similar command to update the deployments without bothering the services.
If you need (or prefer) to delete the ELK stack in another manner before creating a new one, you can also use:
kubectl -n logging delete deployments --all
to delete all of the deployments within the logging namespace. To me, this option seems a little more dangerous than it needs to be.
A second option would be:
kubectl delete deployments kibana
kubectl delete deployments elasticsearch
kubectl delete deployments logstash
if you don't mind the extra typing.
Another option would be to add a new label, something like:
role: application
or
stack: ELK
to each of the Deployment specs. Then you can use:
kubectl delete deployments -l stack=ELK
to limit the scope of the deletion... but again, this seems dangerous.
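For reference, the label goes in the Deployment's metadata, something like this (the name and the rest of the spec are placeholders; the label selector in kubectl delete -l matches against these metadata labels):

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  labels:
    stack: ELK   # matched by: kubectl delete deployments -l stack=ELK
spec:
  # ... rest of the Deployment spec unchanged ...
```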
My preference would be, unless there is some overriding reason not to, to split the config into two or more files and use:
kubectl create -f svc-logging.yaml
kubectl create -f deploy-logging.yaml
kubectl delete -f deploy-logging.yaml
kubectl apply -f deploy-logging.yaml
...
etc
in order to help prevent any nasty typo-induced accidents.
I break things down a little further, with a separate folder for each component containing its Deployment and Service, nested together as makes sense (easier to keep in a repo, and easier if more than one person needs to make changes to related but separate components), usually with bash create/destroy scripts to provide something like documentation... but you get the idea.
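A sketch of what that layout might look like (names are illustrative, not prescriptive):

```
logging/
├── elasticsearch/
│   ├── deployment.yaml
│   └── service.yaml
├── kibana/
│   ├── deployment.yaml
│   └── service.yaml
├── logstash/
│   ├── deployment.yaml
│   └── service.yaml
├── create.sh    # kubectl apply -f each component, in order
└── destroy.sh   # kubectl delete -f the deployments only
```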
Set up this way, you should be able to update any or all deployment components without breaking your DNS/loadbalancing configuration.
(Of course, this all sort of assumes that having everything in one file is not some kind of hard requirement... in that case, I don't have a good answer for you off the top of my head...)
Best Answer
The official documentation has several recommendations regarding Debug Running Pods:
Examining pod logs: run
kubectl logs ${POD_NAME} ${CONTAINER_NAME}
or, if your container has previously crashed,
kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}
Debugging with container exec: run commands inside a specific container with kubectl exec:
kubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD} ${ARG1} ${ARG2} ... ${ARGN}
Debugging with an ephemeral debug container: ephemeral containers are useful for interactive troubleshooting when kubectl exec is insufficient because a container has crashed or a container image doesn't include debugging utilities. You can find an example here.
Debugging via a shell on the node: if none of these approaches work, you can find the host machine that the pod is running on and SSH into that host.
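The ephemeral-container approach above can be sketched like this (assuming a reasonably recent cluster and kubectl that support the debug subcommand; the busybox image is just one common choice):

```
# Attach an ephemeral debug container to a running pod,
# sharing the process namespace of the target container
kubectl debug -it ${POD_NAME} --image=busybox --target=${CONTAINER_NAME}
```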
AKS also helps with this by offering Container Insights.
More sources can be found below:
Enable monitoring of a new Azure Kubernetes Service (AKS) cluster
Monitor your Kubernetes cluster performance with Container insights
How to view Kubernetes logs, events, and pod metrics in real-time