
Skaffold reports pod is ready based on phase not readiness state #5427


Closed
briandealwis opened this issue Feb 20, 2021 · 0 comments · Fixed by #6010
Labels: area/status-check, kind/bug (Something isn't working), kind/friction (Issues causing user pain that do not have a workaround), priority/p1 (High impact feature/bug)

Comments

@briandealwis
Member

In CC-VSC, I saw the following in the output:

Status check started
Resource pod/mongodb-deployment-587b66548b-nmgsh status updated to In Progress
Resource pod/nodetodo-deployment-5dcbd4795b-94sqb status updated to In Progress
Resource pod/mongodb-deployment-587b66548b-nmgsh status completed successfully
Resource deployment/mongodb-deployment status completed successfully
Resource deployment/nodetodo-deployment status updated to In Progress
Resource pod/nodetodo-deployment-5dcbd4795b-94sqb status completed successfully

So pod/nodetodo-deployment-5dcbd4795b-94sqb appeared to be running. But kubectl disagreed:

$ kubectl get all
NAME                                       READY   STATUS    RESTARTS   AGE
pod/mongodb-deployment-587b66548b-nmgsh    1/1     Running   0          92s
pod/nodetodo-deployment-5dcbd4795b-94sqb   0/1     Running   0          92s

NAME                        TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/kubernetes          ClusterIP      10.96.0.1        <none>        443/TCP        16d
service/mongodb-service     ClusterIP      10.98.37.129     <none>        27017/TCP      92s
service/nodetodo-external   LoadBalancer   10.104.156.208   <pending>     80:32354/TCP   92s

NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mongodb-deployment    1/1     1            1           92s
deployment.apps/nodetodo-deployment   0/1     1            0           92s

NAME                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/mongodb-deployment-587b66548b    1         1         1       92s
replicaset.apps/nodetodo-deployment-5dcbd4795b   1         1         0       92s

$ kubectl describe pod/nodetodo-deployment-5dcbd4795b-94sqb
...
  Warning  Unhealthy  32s (x4 over 4m38s)  kubelet            Readiness probe failed: Get "http://172.17.0.5:3000/todo": EOF
  Warning  Unhealthy  32s (x3 over 4m32s)  kubelet            Liveness probe failed: Get "http://172.17.0.5:3000/todo": EOF
  Normal   Killing    32s                  kubelet            Container nodetodo failed liveness probe, will be restarted
  Warning  Unhealthy  32s                  kubelet            Readiness probe failed: Get "http://172.17.0.5:3000/todo": dial tcp 172.17.0.5:3000: connect: connection refused

Eventually the output view added:

Resource pod/nodetodo-deployment-5dcbd4795b-94sqb status failed with Readiness probe failed: Get "http://172.17.0.5:3000/todo": EOF

I suspect our status checks are reporting the pod phase and ignoring the ready state — the same problem as in #5308.

(This caused some head-scratching trying to understand why the app wasn't running properly.)
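For reference, the distinction at play: a pod's `status.phase` can be `Running` while its `Ready` condition is still `False` (e.g. while a readiness probe is failing). The sketch below is not Skaffold's actual implementation; it uses hand-rolled stand-in structs (mirroring only the relevant fields of the Kubernetes `PodStatus` type) to show why a check based on phase alone reports the nodetodo pod as healthy:

```go
package main

import "fmt"

// Hypothetical stand-ins for the relevant Kubernetes API fields;
// not the real k8s.io/api structs.
type PodCondition struct {
	Type   string // e.g. "Ready"
	Status string // "True", "False", or "Unknown"
}

type PodStatus struct {
	Phase      string // e.g. "Pending", "Running", "Succeeded"
	Conditions []PodCondition
}

// isPodReady reports readiness from the Ready condition, not the phase.
func isPodReady(s PodStatus) bool {
	for _, c := range s.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True"
		}
	}
	return false
}

func main() {
	// A pod like nodetodo above: phase Running, readiness probe failing.
	s := PodStatus{
		Phase:      "Running",
		Conditions: []PodCondition{{Type: "Ready", Status: "False"}},
	}
	fmt.Println(s.Phase == "Running") // true: phase alone looks healthy
	fmt.Println(isPodReady(s))        // false: the Ready condition disagrees
}
```

A phase-based check would report this pod as up; a condition-based check correctly waits until `Ready` flips to `True`.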

@briandealwis briandealwis added kind/bug Something isn't working priority/p1 High impact feature/bug. area/status-check labels Feb 20, 2021
@tejal29 tejal29 added the kind/friction Issues causing user pain that do not have a workaround label Feb 22, 2021