Preventing Kaniko log loss #2152
@prary thanks for taking a stab at this. Looks like you've got some build errors in your PR, though. Have a look and ping one of us when you have the build fixed!
Name: "logger-container", | ||
Image: constants.DefaultBusyboxImage, | ||
ImagePullPolicy: v1.PullIfNotPresent, | ||
Command: []string{"sh", "-c", "while [[ $(ps -ef | grep kaniko | wc -l) -gt 1 ]] ; do sleep 1; done; sleep " + clusterDetails.PodGracePeriodSeconds}, |
Hmm, I am a little concerned with adding a sleep here. I think the problem you are trying to solve is:
- the kaniko pod dies and gets cleaned up by the kube admin before we can stream its logs?

Is that correct?
Adding a grace period would still not fix this issue: what if we still don't end up reading the logs within the grace period?
I think the right approach would be for kaniko to write its logs to a file on some volume-mounted path, and for us to attach a sidecar container that reads the logs and writes them out; see https://kubernetes.io/docs/concepts/cluster-administration/logging/#using-a-sidecar-container-with-the-logging-agent
But again, that's a big change. Not sure what the correct thing is here.
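For illustration, a minimal sketch of that sidecar pattern, assuming kaniko were changed to write its output to a file on a shared emptyDir volume (the /logs/kaniko.log path, the images, and the function name are assumptions, not anything kaniko supports today):

```go
package build

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// kanikoPodWithLogSidecar sketches the sidecar logging pattern from the
// Kubernetes docs: kaniko writes its logs to a shared emptyDir volume and
// a busybox sidecar tails the file to its own stdout.
func kanikoPodWithLogSidecar() *v1.Pod {
	logsMount := []v1.VolumeMount{{Name: "logs", MountPath: "/logs"}}
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "kaniko"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{
				{
					Name:  "kaniko",
					Image: "gcr.io/kaniko-project/executor:latest",
					// Assumes kaniko is changed to also write its
					// output to /logs/kaniko.log on the shared volume.
					VolumeMounts: logsMount,
				},
				{
					Name:            "logger",
					Image:           "busybox",
					ImagePullPolicy: v1.PullIfNotPresent,
					// -n +1 replays everything written before the sidecar
					// started; -F keeps following across truncation.
					Command:      []string{"sh", "-c", "tail -n +1 -F /logs/kaniko.log"},
					VolumeMounts: logsMount,
				},
			},
			Volumes: []v1.Volume{{
				Name:         "logs",
				VolumeSource: v1.VolumeSource{EmptyDir: &v1.EmptyDirVolumeSource{}},
			}},
		},
	}
}
```

The sidecar's stdout then goes through the normal kubelet logging path, so the log lines remain readable as long as the sidecar container is alive, even after the kaniko container itself has exited.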
> the kaniko pod dies and gets cleaned up by the kube admin before we can stream its logs?

Exactly.

> Adding a grace period would still not fix this issue: what if we still don't end up reading the logs within the grace period?

The user can simply increase the grace period, which is configurable.

> I think the right approach would be for kaniko to write its logs to a file on some volume-mounted path, and for us to attach a sidecar container that reads the logs and writes them out.

Yes, that would be a big change, and it is one possible solution. We could also change kaniko itself to store its logs in some volume, or maybe find something even better.

> kaniko should write its logs to a file on some volume-mounted path, and we attach a sidecar container that reads the logs and writes them out.

How would we make sure that all the logs have been fetched, rather than some still streaming?
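For context, the logs are read today by following the pod's log stream until the kubelet closes it; a minimal client-go sketch of that read loop (the function and container names are illustrative, not the actual skaffold code):

```go
package build

import (
	"context"
	"io"
	"os"

	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// streamKanikoLogs follows the kaniko container's logs until the kubelet
// closes the stream, which normally happens when the container terminates.
func streamKanikoLogs(ctx context.Context, client kubernetes.Interface, ns, pod string) error {
	req := client.CoreV1().Pods(ns).GetLogs(pod, &v1.PodLogOptions{
		Container: "kaniko",
		Follow:    true, // keep reading until the stream is closed
	})
	rc, err := req.Stream(ctx)
	if err != nil {
		return err
	}
	defer rc.Close()
	// io.Copy returns only on EOF, i.e. once no more log lines will arrive.
	_, err = io.Copy(os.Stdout, rc)
	return err
}
```

Under this model, "all logs fetched" means the copy loop has hit EOF; the grace period in this PR buys time for that to happen before the pod is deleted.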
This is blocked until #2083 is in!
We found a Contributor License Agreement for you (the sender of this pull request), but were unable to find agreements for all the commit author(s) or co-authors. If you authored these, maybe you used a different email address in the git commits than was used to sign the CLA (login here to double check)? If these were authored by someone else, then they will need to sign a CLA as well, and confirm that they're okay with these being contributed to Google. ℹ️ Googlers: Go here for more info.
Force-pushed from ec84742 to 54b7f17 (compare)
#2352 solves the log loss problem, hence closing this.
Adding PodGracePeriodSeconds for the kaniko pod, to ensure all the logs are streamed before it dies.
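For reference, a sketch of the config surface this adds, matching the clusterDetails.PodGracePeriodSeconds reference in the snippet above (the struct shape and YAML tag are assumptions for illustration, not the merged schema):

```go
package build

// ClusterDetails sketches the cluster-build config addition this PR proposes.
type ClusterDetails struct {
	// PodGracePeriodSeconds keeps the logger sidecar alive for this many
	// seconds after kaniko exits, so buffered logs can still be streamed.
	// Stored as a string because it is interpolated into the sidecar's
	// shell command (see the logger-container snippet above).
	PodGracePeriodSeconds string `yaml:"podGracePeriodSeconds,omitempty"`
}
```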