
Preventing Kaniko log loss #2152


Closed
wants to merge 2 commits

Conversation

Contributor

@prary prary commented May 19, 2019

Adding PodGracePeriodSeconds for the kaniko pod, to ensure all the logs are streamed before it dies.

Contributor

nkubala commented May 20, 2019

@prary thanks for taking a stab at this! Looks like you've got some build errors in your PR, though. Have a look and ping one of us when the build is fixed!

Name: "logger-container",
Image: constants.DefaultBusyboxImage,
ImagePullPolicy: v1.PullIfNotPresent,
Command: []string{"sh", "-c", "while [[ $(ps -ef | grep kaniko | wc -l) -gt 1 ]] ; do sleep 1; done; sleep " + clusterDetails.PodGracePeriodSeconds},
Contributor


Hmm, I am a little bit concerned with adding a sleep here. I think the problem you are trying to solve is when

  • the kaniko pod dies and gets cleaned up by the kube admin before we could stream its logs?

Is that correct?
Adding a grace period still wouldn't fix this: what if we still don't end up reading the logs within the grace period?
I think the right approach would be for kaniko to write its logs to a file on some volume-mounted path, and we attach a sidecar container to read the logs and write them out. https://kubernetes.io/docs/concepts/cluster-administration/logging/#using-a-sidecar-container-with-the-logging-agent

But again, that's a big change. Not sure what the correct thing is here.
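
A rough sketch of that sidecar idea (not part of this PR), using the same k8s.io/api/core/v1 types as the reviewed snippet above; the container names, volume name, and /logs path are illustrative assumptions, not values taken from skaffold or kaniko:

// Sketch only: kaniko writes its build output to a file on a shared emptyDir
// volume, and a sidecar tails that file so the log reader never loses the end
// of the output when the kaniko container exits.
package sketch

import v1 "k8s.io/api/core/v1"

func kanikoPodSpecWithLogSidecar() v1.PodSpec {
	logs := v1.Volume{
		Name:         "kaniko-logs",
		VolumeSource: v1.VolumeSource{EmptyDir: &v1.EmptyDirVolumeSource{}},
	}
	logsMount := v1.VolumeMount{Name: "kaniko-logs", MountPath: "/logs"}

	return v1.PodSpec{
		Volumes: []v1.Volume{logs},
		Containers: []v1.Container{
			{
				// kaniko (or a thin wrapper) would write its build output to a
				// file on the shared volume instead of only to stdout.
				Name:         "kaniko",
				Image:        "gcr.io/kaniko-project/executor",
				VolumeMounts: []v1.VolumeMount{logsMount},
			},
			{
				// The sidecar just tails that file; it stays alive after the
				// kaniko container exits, so the tail of the logs is never lost.
				Name:            "log-streamer",
				Image:           "busybox",
				ImagePullPolicy: v1.PullIfNotPresent,
				Command:         []string{"sh", "-c", "touch /logs/build.log && tail -n +1 -f /logs/build.log"},
				VolumeMounts:    []v1.VolumeMount{logsMount},
			},
		},
	}
}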

Contributor Author

@prary prary May 22, 2019


the kaniko pod dies and gets cleaned up by the kube admin before we could stream its logs?

Exactly.

Adding a grace period still wouldn't fix this: what if we still don't end up reading the logs within the grace period?

The user can simply increase the grace period, which is user-configurable.

I think the right approach would be for kaniko to write its logs to a file on some volume-mounted path, and we attach a sidecar container to read the logs and write them out.

Yes, that would be a big change, and it is one possible solution. We could also change kaniko to store its logs in some volume, or maybe do something even better.

kaniko should write its logs to a file on some volume-mounted path and we attach a sidecar container to read the logs and write them out.

How would we make sure that all the logs have been fetched, or whether some logs are still streaming?
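
One pattern that could make this deterministic, sketched here under assumptions rather than proposed in this thread: the log writer appends an explicit end-of-build marker line, and the reader stops only once it has seen that marker. The marker string, log path, and helper below are hypothetical:

// Sketch only: stream a shared log file until an assumed end-of-build marker
// line appears, so the reader knows no further log output is coming.
package sketch

import (
	"bufio"
	"fmt"
	"io"
	"os"
	"strings"
	"time"
)

// endOfBuildMarker is a hypothetical sentinel line the log writer would append
// once the build has finished; it is not something kaniko emits today.
const endOfBuildMarker = "KANIKO_BUILD_COMPLETE"

func followUntilMarker(path string, out io.Writer) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	r := bufio.NewReader(f)
	pending := "" // holds a partially written line until its newline arrives
	for {
		chunk, err := r.ReadString('\n')
		pending += chunk
		switch {
		case err == nil:
			// A complete line: forward it and check for the sentinel.
			fmt.Fprint(out, pending)
			if strings.TrimSpace(pending) == endOfBuildMarker {
				return nil // all logs have been read
			}
			pending = ""
		case err == io.EOF:
			// The writer may still be producing output; wait and poll again.
			time.Sleep(200 * time.Millisecond)
		default:
			return err
		}
	}
}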

@tejal29 tejal29 added the !! blocked !! this issue/PR is blocked by another issue label May 20, 2019
Contributor

tejal29 commented May 20, 2019

This is blocked until #2083 is in!

@googlebot

We found a Contributor License Agreement for you (the sender of this pull request), but were unable to find agreements for all the commit author(s) or Co-authors. If you authored these, maybe you used a different email address in the git commits than was used to sign the CLA (login here to double check)? If these were authored by someone else, then they will need to sign a CLA as well, and confirm that they're okay with these being contributed to Google.
In order to pass this check, please resolve this problem and have the pull request author add another comment and the bot will run again. If the bot doesn't comment, it means it doesn't think anything has changed.

ℹ️ Googlers: Go here for more info.

@tejal29 tejal29 force-pushed the kaniko_grace_period branch from ec84742 to 54b7f17 Compare July 9, 2019 23:09
@tejal29 tejal29 removed the cla: no label Jul 9, 2019
@prary
Copy link
Contributor Author

prary commented Aug 17, 2019

#2352 solves the log leakage problem, hence closing this.

@prary prary closed this Aug 17, 2019
Labels
!! blocked !! (this issue/PR is blocked by another issue), kind/design discussion
4 participants