Fix missing logs when kaniko exits immediately #2352
Conversation
It feels strange to duplicate the pods.GetLogs part, but it does the job.
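For readers unfamiliar with the helper being duplicated: the sketch below shows roughly what streaming a kaniko pod's logs through client-go looks like. It is a minimal illustration, not skaffold's actual code; the namespace and pod name are placeholders, and it assumes a client-go version where rest.Request.Stream takes a context.

```go
package main

import (
	"context"
	"io"
	"log"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// streamPodLogs follows the log stream of a single-container pod and
// copies it to out until the stream closes (i.e. the container exits).
func streamPodLogs(ctx context.Context, client kubernetes.Interface, ns, pod string, out io.Writer) error {
	req := client.CoreV1().Pods(ns).GetLogs(pod, &corev1.PodLogOptions{Follow: true})
	rc, err := req.Stream(ctx)
	if err != nil {
		return err
	}
	defer rc.Close()
	_, err = io.Copy(out, rc)
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// "default" and "kaniko" are placeholder namespace and pod names.
	if err := streamPodLogs(context.Background(), client, "default", "kaniko", os.Stdout); err != nil {
		log.Fatal(err)
	}
}
```

Note the inherent race in a Follow-style reader: if the container exits before Stream is called, the logs can be missed entirely, which appears to be the situation this PR works around.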
It doesn't look like this fix worked. I wasn't able to get any logs running v1.1.0. See below:
The only way to view the logs was to run kubectl logs very quickly while the pod was still running. Even Stackdriver didn't have the logs, or I just don't know how to use the correct filter :) A way to reproduce is to create a secret with an incorrect service account that causes …
Running into the same issue as @samos123 (or at least I think I am). There is no output, unless kaniko isn't producing any?
Kaniko in my case was producing logs; I saw them by quickly running kubectl logs, but it was very hard to time it exactly right and catch them.
I assume #3238 reintroduced this. A quick fix would be to call …
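The suggested call above is cut off in the original comment. Purely as a hedged illustration of one possible quick fix (not necessarily what the commenter meant): a terminated container's logs remain available from the API server for as long as the pod object exists, so they can be re-read in a single non-following request after exit. fetchLogsAfterExit is a hypothetical helper name, not skaffold's actual API.

```go
package logs

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// fetchLogsAfterExit reads a terminated pod's full log in one non-following
// request. Because the API server serves a terminated container's logs for
// as long as the pod object itself exists, reading after exit avoids the
// race of attaching a follower to a container that dies immediately.
// This is a hypothetical sketch, not the fix proposed in this PR.
func fetchLogsAfterExit(ctx context.Context, client kubernetes.Interface, ns, pod string) ([]byte, error) {
	return client.CoreV1().Pods(ns).
		GetLogs(pod, &corev1.PodLogOptions{}).
		Do(ctx).
		Raw()
}
```

The trade-off is that this only works if the pod is not deleted before the logs are read, so any cleanup of the kaniko pod would have to be deferred until after the final log fetch.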
This is my proposed fix for #1978 (as described in #2083).
@prary @priyawadhwa @tejal29