59/100 Troubleshoot Deployment issues in Kubernetes
Last week, the Nautilus DevOps team deployed a redis app on the Kubernetes cluster, and it had been working fine until this morning, when one of the team members made some changes to the existing setup, made a mistake, and brought the app down. The deployment name is redis-deployment. The pods are not in a running state right now, so look into the issue and fix it.
thor@jumphost ~$ kubectl get deployment redis-deployment
NAME READY UP-TO-DATE AVAILABLE AGE
redis-deployment 0/1 1 0 72s
thor@jumphost ~$ kubectl get pods -l app=redis
NAME READY STATUS RESTARTS AGE
redis-deployment-54cdf4f76d-tpd68 0/1 ContainerCreating 0 102s
Use the pod name from above to inspect the pod details:
thor@jumphost ~$ kubectl describe pod redis-deployment-54cdf4f76d-tpd68
Name: redis-deployment-54cdf4f76d-tpd68
Namespace: default
Priority: 0
Service Account: default
Node: kodekloud-control-plane/172.17.0.2
Start Time: Thu, 23 Oct 2025 09:56:15 +0000
Labels: app=redis
pod-template-hash=54cdf4f76d
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/redis-deployment-54cdf4f76d
Containers:
redis-container:
Container ID:
Image: redis:alpin
Image ID:
Port: 6379/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Requests:
cpu: 300m
Environment: <none>
Mounts:
/redis-master from config (rw)
/redis-master-data from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-26kqb (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
data:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: redis-conig
Optional: false
kube-api-access-26kqb:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m53s default-scheduler Successfully assigned default/redis-deployment-54cdf4f76d-tpd68 to kodekloud-control-plane
Warning FailedMount 50s kubelet Unable to attach or mount volumes: unmounted volumes=[config], unattached volumes=[], failed to process volumes=[]: timed out waiting for the condition
Warning FailedMount 45s (x9 over 2m53s) kubelet MountVolume.SetUp failed for volume "config" : configmap "redis-conig" not found
The last event makes the problem clear: there is a typo in the ConfigMap reference. The deployment should reference redis-config, but it is looking for a ConfigMap named redis-conig, which does not exist.
Reviewing the output further, the image name for the redis container also has a typo: it is missing the trailing e (redis:alpin instead of redis:alpine).
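Before editing anything, it is worth confirming what the ConfigMap is actually called. A quick check (assuming everything lives in the default namespace, as in the describe output above):

```shell
# List all ConfigMaps in the current namespace; the correctly named
# redis-config should appear, while the misspelled redis-conig should not.
kubectl get configmap

# Query the expected name directly; this prints configmap/redis-config
# if it exists, or a NotFound error otherwise.
kubectl get configmap redis-config -o name
```

This tells you whether the fix is a typo correction in the deployment (as here) or a genuinely missing ConfigMap that needs to be created.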
Edit the file with:
kubectl edit deployment redis-deployment
-- snipped --
defaultMode: 420
name: redis-conig <- Change this to read name: redis-config
name: config
-- snipped --
redis-container:
Container ID:
Image: redis:alpin <- Change this to read redis:alpine
thor@jumphost ~$ kubectl edit deployment redis-deployment
deployment.apps/redis-deployment edited
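If you prefer a non-interactive fix over kubectl edit, the same two changes can be applied with kubectl set image and a JSON patch. This is a sketch: the patch path assumes the config volume is the second entry (index 1) in the pod template's volumes list, matching the order shown in the describe output above (data, config, kube-api-access).

```shell
# Fix the image tag on the redis-container.
kubectl set image deployment/redis-deployment redis-container=redis:alpine

# Fix the ConfigMap name referenced by the "config" volume
# (path assumes "config" is the second volume in the pod template).
kubectl patch deployment redis-deployment --type=json \
  -p='[{"op": "replace", "path": "/spec/template/spec/volumes/1/configMap/name", "value": "redis-config"}]'
```

Because both commands change the pod template, each triggers a new rollout, so the stuck pod is replaced automatically.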
Now delete the stuck pod:
thor@jumphost ~$ kubectl delete pod redis-deployment-54cdf4f76d-tpd68
pod "redis-deployment-54cdf4f76d-tpd68" deleted
Check the rollout status and then confirm the pods are up:
thor@jumphost ~$ kubectl rollout status deployment redis-deployment
deployment "redis-deployment" successfully rolled out
thor@jumphost ~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
redis-deployment-7c8d4f6ddf-w889f 1/1 Running 0 2m56s
Summary of fixes
| Issue | Root cause | Fix |
|---|---|---|
| ImagePullBackOff | Invalid image name redis:alpin | Change to redis:alpine |
| ContainerCreating | Missing ConfigMap redis-conig | Fix typo → redis-config, or create the ConfigMap |
| Old pods stuck | Outdated ReplicaSet pods | Delete them; the Deployment recreates new pods |
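As an optional final sanity check, you can verify that Redis itself is responding inside the new pod; redis-cli's PING command should come back with PONG:

```shell
# Run redis-cli inside a pod of the deployment (kubectl resolves
# deploy/<name> to one of its pods); a healthy server replies PONG.
kubectl exec deploy/redis-deployment -- redis-cli ping
```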