Kubernetes Pod Restarts – How to Find the Restart Reasons

kubectl, kubernetes

I have scaled my pods to 20 in my cluster, and when I checked the next day, a few of the scaled pods had been recreated.

When I say a pod was recreated, I mean it was deleted and created fresh; the timestamps of the recreated pods differ from those of the originally scaled pods.

I was unable to find the reason the pods were recreated.

I could not tell which pod was recreated, because the original pod was deleted and is gone. There are no entries in journalctl about which pod was recreated. Is there any way I can debug further to find the reason for the pod recreation, or what might be causing the pods to be deleted?

Note: I have readiness and liveness probes defined, but my understanding is that these probes act on the container; a failed probe restarts the container and does not lead to the pod being recreated.
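
For reference, this is how I am distinguishing container restarts from pod recreation (the pod name is a placeholder): the RESTARTS column counts container restarts within the same pod object, while a recreated pod shows up with a new name, a new UID, and a reset AGE.

    # RESTARTS counts container restarts inside the same pod object;
    # a recreated pod has a new name, a new UID, and a fresh AGE
    kubectl get pods -o wide
    kubectl get pod <pod-name> -o jsonpath='{.metadata.uid} {.metadata.creationTimestamp}'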

Best Answer

Basically, you need to check the pod's events.
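
A minimal sketch of how to do that with kubectl (the namespace and pod name are placeholders):

    # Events for a specific pod appear at the bottom of the describe output
    kubectl describe pod <pod-name> -n <namespace>

    # All recent events in the namespace, oldest first
    kubectl get events -n <namespace> --sort-by=.metadata.creationTimestamp

    # Cluster-wide events, useful when the pod itself is already gone
    kubectl get events -A --sort-by=.metadata.creationTimestamp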

Keep in mind that the event retention period is quite short (one hour by default, controlled by the API server's --event-ttl flag), so you may need to ship events somewhere else for long-term storage, for example with an EFK (Elasticsearch, Fluentd, Kibana) stack.
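
If a full EFK pipeline is more than you need right now, a quick stopgap (my own suggestion, not a production setup) is to stream events to a file so they outlive the retention window:

    # Watch all events cluster-wide and append them to a local file as JSON
    kubectl get events -A --watch -o json >> cluster-events.json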

The most common reason for pod recreation is node unavailability: if the node a pod is running on becomes unavailable, Kubernetes recreates the controller-managed pods on other nodes.
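
To check whether that is what happened, inspect the nodes and where the current pods are scheduled (the node name is a placeholder):

    # Look for nodes that are NotReady or were recently unavailable
    kubectl get nodes
    kubectl describe node <node-name>   # check the Conditions and Events sections

    # The NODE column shows where each pod is currently scheduled
    kubectl get pods -o wide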
