Skaffold dev uses older deployment pod state as new deployment error #4947
Comments
Skaffold fetches pods/services based on the …
I'm experiencing the same behavior. When Skaffold tries to update one of my pods, the deployment fails with the following error message:

My whole application becomes unresponsive at this point.
I saw this same behaviour with …
I'm seeing the same behavior in 1.20.0. Has anyone come across any workarounds? Currently the only thing I can do is kill skaffold and relaunch, at which point the deployment is detected as stabilized (once the new pod is spun up). This defeats the purpose of the "dev" mode.
Similar issue, but the problem is that the old deployment starves my CPU resources, because the old deployment doesn't get pruned while Skaffold is creating the new one.
@pot-code just to be clear, deployment management is performed by Kubernetes, not Skaffold. It sounds like you should look at the Deployment …
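For context on the CPU-starvation point above: the relevant knobs live on the Kubernetes Deployment itself rather than in Skaffold. Below is a minimal sketch, with entirely hypothetical names and values, of the fields that control how much extra capacity a rollout consumes and how many old ReplicaSets are retained:

```yaml
# Hypothetical Deployment manifest (names/values are illustrative, not from the thread).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service-v1            # hypothetical, matching the pod prefix in the report
spec:
  replicas: 2
  revisionHistoryLimit: 2        # limit how many old ReplicaSets Kubernetes keeps around
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1                # at most one extra pod exists during a rollout
      maxUnavailable: 0          # old pods keep serving until the new pod is Ready
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: my-service:latest
          resources:
            requests:
              cpu: "250m"        # sized so old pods plus the surge pod fit on the node
            limits:
              cpu: "500m"
```

With maxUnavailable: 0 and maxSurge: 1, Kubernetes keeps the old pods serving until the replacement becomes Ready, so the CPU requests have to leave room for one extra pod during every rollout.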
There have been a number of improvements to Skaffold's status checking since this issue was first opened. In particular, Skaffold changed its default status check timeout in v1.18.0 to 10 minutes to match Kubernetes' default (#5247). I'm going to close this issue: if you're seeing errors relating to redeploys then please open a new issue with details to reproduce.
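For reference, the timeout mentioned above can also be set per project in skaffold.yaml. A sketch, assuming the v2beta config schema and with hypothetical artifact and manifest paths:

```yaml
# Sketch of a skaffold.yaml that overrides the status-check deadline.
apiVersion: skaffold/v2beta26
kind: Config
build:
  artifacts:
    - image: my-service               # hypothetical image name
deploy:
  statusCheckDeadlineSeconds: 600     # wait up to 10 minutes for deployments to stabilize
  kubectl:
    manifests:
      - k8s/*.yaml                    # hypothetical manifest path
```

Running skaffold dev against a config like this gives each deployment up to 10 minutes to stabilize before a status-check failure is reported.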
This is still an issue with skaffold v1.25.0. I'll see if I can find time to create a new issue.
This seems to be fixed in skaffold v1.27.0. Thank you. ❤️
Expected behavior

While skaffold dev is running, a new deployment should not be marked as failed if a previous deployment is in backoff state or exits with an error when terminated.

Actual behavior

As seen below, pod/my-service-v1-6d576c8f74-48qhc is the new pod created by the new deployment cycle, while the two previous attempts are in backoff mode. The older deployments are terminated by Kubernetes as soon as the new deployment enters the running state, which can be seen in the kubectl output below.

Information